Say that, by some measurement technique, we have been able to obtain data that describes how a material property varies inside a representative volume of a material. By visualizing the data, we can identify several regions with distinct properties; for example, the pores and the solid regions in a block of porous material. But what do we do if the identified regions form highly irregular shapes that are not suitable for generating geometry in the COMSOL Multiphysics® software?
If we have access to the material properties in coordinates inside our representative volume, the shape of which we can easily draw, then we can use an interpolation function based on this data to define the material for our simulation, thus skipping the creation of the irregularly shaped geometry object altogether.
One example is the Pore-Scale Flow tutorial model, where we solve the incompressible, stationary Brinkman equations on a rectangular-shaped domain with material parameters based on an image function.
The image function representing the porous medium used in the Pore-Scale Flow tutorial. The color ranges from 0 to 1, where blue (0) represents the fluid and red (1) the solid.
In the regions marked with red, the material parameters are set to imitate a solid, and the material parameters for the fluid are defined in the blue region.
Let’s return to the previously discussed model of the human head, which we lofted based on curve data in different cross sections. Say we don’t have the curve data needed to loft the head or access to the Loft operation in the add-on Design Module. Instead, we have a text file with the material properties defined in coordinates inside the region. Let’s start by introducing the tutorial where the geometry of the head is used.
Slice plot showing the electrical conductivity (S/m) for the air (dark blue) and the head.
The Specific Absorption Rate (SAR) in the Human Brain tutorial models the absorption of a radiated wave from an antenna into a human head, as well as the resulting increase in temperature. The patch antenna is placed on the left side of the head and is excited by a lumped port. The tutorial described in this blog post demonstrates how to set up the model without the geometry of the head and shows what modifications are needed to accomplish this.
Specific Absorption Rate (SAR) in the Human Brain tutorial model. The isosurfaces show the temperature increase, dT (K), as a result of the absorbed radiation from the antenna.
The goal is to import the material data into interpolation functions that we can use to define the material properties, which makes the computational domain much simpler. Without the head geometry, we only need the sphere, which represents both the head and the surrounding air, and the domains for the perfectly matched layers (PMLs). We also keep the smaller block with added edges, which represents the patch antenna.
Geometry for the simulation without the geometry object for the head.
A text file with material data can be formatted in different ways. For this example, the Spreadsheet format is used. In such a text file, there is one column each for the x-, y-, and z-coordinates, followed by an arbitrary number of data columns. The columns in the text file of our model include:
x, y, z
k (W/(m*K))
epsr (1)
mur (1)
sigma (S/m)
omega_head (1/s)
If the text file contains NaN entries, it is usually recommended to replace them with zero, a large value, or a small value, depending on the material property, since the solver produces errors if the functions evaluate to NaN in the computational domain. We create one Interpolation function per data column in the file and add the units for the arguments as well as for the function itself so that the units are correctly recognized in the physics. When all data columns have the same unit, it is possible to create all functions within the same Interpolation feature.
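As a sketch of this preprocessing step, the following Python snippet removes NaN entries from spreadsheet-format data before import. The column layout and the replacement value are assumptions for illustration; this is not part of the tutorial itself:

```python
# Hypothetical cleanup of a spreadsheet-format material file: replace NaN
# in the data columns (everything after x, y, z) so the interpolation
# functions never return NaN inside the computational domain.
import numpy as np

def clean_material_data(rows, replacement=0.0):
    """Return a copy of the data with NaN entries in the value columns
    replaced by a chosen constant (zero here, by assumption)."""
    data = np.asarray(rows, dtype=float)
    coords, values = data[:, :3], data[:, 3:]
    values = np.where(np.isnan(values), replacement, values)
    return np.hstack([coords, values])

# Tiny example: columns x, y, z, k (W/(m*K)) with one missing entry.
raw = [[0.00, 0.0, 0.0, 0.5],
       [0.01, 0.0, 0.0, float("nan")],
       [0.02, 0.0, 0.0, 0.5]]
cleaned = clean_material_data(raw)
```

The cleaned array can then be written back to a text file in the same x, y, z, data-column order before importing it into an Interpolation feature.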
We create the Interpolation features in the Global Definitions section of the Model Builder, since we want the material property functions not only for defining the material properties for the physics but also for helping generate a finer mesh where the surface of the head would be. To do this, we use a Size Expression feature when generating the mesh, as discussed later in the blog post.
The Settings window for one of the interpolation functions. The number in the Position in file field corresponds to the number of the data column, starting from 1. In this image, sigma_int is taken from the fourth data column in the text file. The unit of the function can be entered at the bottom of the Settings window.
When geometry boundaries define the transition from one domain to the next, the mesh must always follow the shape of the faces that separate the domains. Here, there are no faces that represent the head, so we need to manually tell the mesh algorithm where it is important to refine the mesh. The mesh size can be about the same inside the head as outside the head, so it is most important to resolve the border between the two materials (head and air). In general, it is important to resolve a change between materials as that will typically impose a gradient in the fields. The larger the gradients, the finer the mesh must be to resolve the transition. For some applications, it is important to also resolve the mesh in the complete domain and not just at the borders between the different materials.
To refine the mesh along the shape of the head, we add two global variables based on the interpolation function k_int, as that function is 0.5 W/(m*K) inside the head and almost zero in the air. The variable avMat is 1 inside the head and onBnd is 1 in the vicinity of the boundary:

avMat = (k_int(x-d,y,z)/0.5 + k_int(x+d,y,z)/0.5 + k_int(x,y-d,z)/0.5 + k_int(x,y+d,z)/0.5 + k_int(x,y,z-d)/0.5 + k_int(x,y,z+d)/0.5)/6

onBnd = (avMat>0.01)*(avMat<0.99)
The parameter d is equal to the fine mesh size used in the Size Expression node and defines the thickness of the region where the mesh is refined (highlighted in red in the following image).
A slice plot of the onBnd variable. The values range from zero (blue) to 1 (red). The mesh is refined in the region shown in red using a mesh Size Expression.
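The averaging logic behind avMat and onBnd can be sketched in Python. The stand-in k_int below (a sphere of assumed radius representing the head) is hypothetical; only the six-point averaging formula and the thresholds mirror the model:

```python
# Sketch of the avMat/onBnd variables, assuming k_int(x, y, z) returns
# 0.5 W/(m*K) inside the head and 0 in the air. The spherical "head" of
# radius 0.1 m is a stand-in for the real interpolated data.
import numpy as np

d = 0.007  # fine mesh size (m), same value as the Fine parameter

def k_int(x, y, z):
    # Hypothetical material function: thermal conductivity of the "head".
    return np.where(x**2 + y**2 + z**2 < 0.1**2, 0.5, 0.0)

def avMat(x, y, z):
    # Average of the normalized property over six points at distance d.
    return (k_int(x - d, y, z)/0.5 + k_int(x + d, y, z)/0.5 +
            k_int(x, y - d, z)/0.5 + k_int(x, y + d, z)/0.5 +
            k_int(x, y, z - d)/0.5 + k_int(x, y, z + d)/0.5) / 6

def onBnd(x, y, z):
    # True only where the six-point average is neither fully air nor
    # fully head, i.e., within about d of the material boundary.
    a = avMat(x, y, z)
    return (a > 0.01) & (a < 0.99)
```

Deep inside the head avMat evaluates to 1, far outside to 0, and near the surface it takes intermediate values, so onBnd singles out a thin shell around the boundary.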
Under the Free Tetrahedral node in the Mesh sequence, a Size Expression feature node is added to specify an expression that determines the mesh size to use. Note that the expression should evaluate to the size you want to use for your mesh (in meters if you are using SI units).
We use the default option, Evaluate on: Grid. This means that the expression is evaluated on a regular grid and interpolated between the grid points. This grid is not visible in the model but merely used to evaluate the size expression. When using this setting, it is important that all variables and functions used in the expression are defined under Global Definitions. We also increase the setting Number of cells per dimension to 50 (the default value is 25) to better resolve the region with a finer regular evaluation grid. Using any of the other options in the Evaluate on list gives the possibility to manually control the mesh on which the evaluation is done. The Size expression we use is defined as
onBnd*Fine+!onBnd*Coarse
where the two parameters, Fine and Coarse, are defined as 0.007 m and 0.07 m, respectively. Because they are defined as parameters, they can easily be changed manually or varied in a Parametric Sweep study. There is a plot group called Mesh plot in the tutorial model that shows a cross section of the mesh used for the calculations. You can learn more in a previous blog post on visualizing mesh in greater detail.
A mesh plot using a mesh Filter, plotting the mesh element size for x < 0. This image shows a much more refined mesh to make the plot clear. The tutorial model available for download uses a coarser mesh to save memory and time.
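As a minimal illustration of how the size expression onBnd*Fine + !onBnd*Coarse switches between the two element sizes, here is a small Python sketch; the function name is hypothetical:

```python
# The size expression evaluates to Fine wherever onBnd is 1 (true) and
# to Coarse everywhere else; !onBnd is the logical negation of onBnd.
import numpy as np

Fine, Coarse = 0.007, 0.07  # target element sizes (m), as in the tutorial

def size_expression(on_bnd):
    """Mimic onBnd*Fine + !onBnd*Coarse for an array of boolean values."""
    on_bnd = np.asarray(on_bnd, dtype=bool)
    return np.where(on_bnd, Fine, Coarse)

sizes = size_expression([True, False, True])
```

Because Fine and Coarse are parameters, rerunning with different values only changes these two numbers, exactly as in the model.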
As mentioned above, we will only discuss the differences in setting up the materials and physics compared to the original tutorial. You can find details about the materials and physics used in the Specific Absorption Rate (SAR) in the Human Brain tutorial model. We keep Material 1 as the material for the antenna. A new material is added for the rest of the domains (the air and head). The material properties are defined using the interpolation functions k_int(x,y,z), epsr_int(x,y,z), mur_int(x,y,z), and sigma_int(x,y,z).
We select domain 5 (the air and head domain) to be active in the Bioheat Transfer physics interface. In the Bioheat subnode, two of the material properties for the blood are defined using the global variable avMat to make sure the value of the material properties is zero outside the head. For example, Specific heat, blood is defined as c_blood*avMat, where c_blood is a parameter. The Blood perfusion rate is set to the interpolation function omega_blood(x,y,z). We are now done with our modifications and are ready to compare the results after solving the model.
As seen in the images showing the SAR value, the radiation is absorbed in a similar manner in the two models. You can see the shape of the head in the slices, even though the edges of the slices are rather rough compared to the original model with the geometry object for the head. Manual color and data ranges are used for the plot in this tutorial, using interpolation functions to filter out the shape of the head.
Log-scale slice plot of the local SAR value. The left image shows the original tutorial and the right image shows the results from the model using interpolated materials.
We use a Volume maximum feature to calculate the maximum temperature rise in the head. This value is about 0.15 K in the original model and about 0.17 K in the model with the interpolated materials.
The two main components for obtaining adequate results are the resolution of the material data and the resolution of the calculation mesh:
It is possible to have material data that is much more resolved than the actual calculation mesh, but it will not help to make a finer calculation mesh if the material data isn't resolved enough to match the calculation mesh. While a fine calculation mesh requires more memory and time to solve, better-resolved material data usually just takes more time to solve, as there is more data to interpolate from.
There is also another source of error: the fact that we are solving for bioheat transfer in the air with a small, nonzero thermal conductivity. However, this shouldn't influence the results too much. The main sources of errors are the resolution of the material data and the calculation mesh.
In this blog post, we have shown how to set up a model with coordinate-dependent material properties defined in a text file, as well as how to adapt the mesh size to accurately resolve the border between two materials. We also discussed the sources of errors coming into play and how to improve the accuracy of the results. In a modeling scenario where we cannot generate the geometry of highly irregularly shaped objects, this approach can be a real lifesaver.
Try the tutorial model featured in this blog post by clicking the button below. From the Application Gallery, you can log into your COMSOL Access account and download the MPH-file.
The most common type of HPC hardware is a cluster: a group of individual computers (often called nodes) connected by a network. Even if there is only one dedicated simulation machine, you can think of it as a one-node cluster.
The COMSOL Reference Manual also calls a single COMSOL Multiphysics process a node. The difference is rarely important, but when it does matter, we will call a computer a physical node or host and an instance of the COMSOL Multiphysics program a compute node or process.
An example of a cluster with four compute nodes.
The work that we want to perform on the cluster is bundled into atomic units, called jobs, that are submitted to the cluster. A job in this context is a study being run with COMSOL Multiphysics.
When you submit a job to a cluster, the cluster does two things: it schedules the job, deciding when and on which nodes it runs, and it manages the computational resources allocated to it.
These tasks are performed by special programs called schedulers and resource managers, respectively. Here, we use the term scheduler for both, since most programs perform both tasks at once anyway.
Note that it is possible to submit COMSOL Multiphysics jobs to a cluster using the comsol batch command (on the Linux® operating system) or comsolbatch.exe (on the Windows® operating system) in a script that you submit to the cluster. You might prefer this method if you’re already familiar with console-based access to your cluster. For additional information, please see the COMSOL Knowledge Base article "Running COMSOL® in parallel on clusters".
In the following sections, we will discuss using the Cluster Computing node to submit and monitor cluster jobs from the COMSOL Desktop® graphical interface.
Whenever I want to configure the Cluster Computing node for a cluster that I am not familiar with yet, I like to start with a simple busbar model. This model solves in a few seconds and is available with any license, which makes testing the cluster computing functionality very easy.
To run the busbar model on a cluster, we add the Cluster Computing node to the main study. We might need to enable Advanced Study Options first, though. To do so, we activate the option in Preferences or click the Show button in the Model Builder toolbar.
Activate Advanced Study Options to enable the Cluster Computing node.
Now the Cluster Computing node can be added to any study by right-clicking the study and selecting Cluster Computing.
Right-click a study node and select Cluster Computing from the menu to add it to the model.
The default settings for the Cluster Computing node.
If you can’t find the Cluster Computing node, chances are your license is not cluster-enabled (as is the case for CPU-locked licenses and academic class kit licenses). In this case, you can contact your sales representative to discuss licensing options.
The most complex part of using the Cluster Computing node is finding the right settings and using it for the first time. Once the node works on your cluster for one model, it is very straightforward to adjust the settings slightly for other simulations.
To store the settings as defaults, you can change the settings under Preferences in the sections Multicore and Cluster Computing and Remote Computing. Alternatively, you can apply the default settings to the Cluster Computing node directly and click the Save icon at the top of the Settings window. It is highly recommended to store the settings as default settings either way, so you do not have to type everything again for the next model.
Discussing all of the possible settings for the Cluster Computing node is beyond the scope of this blog post, so we will focus on a typical setup. The COMSOL Multiphysics Reference Manual contains additional information. In this blog post, the following is assumed:
These settings are shown in this screenshot:
First, let’s talk about the section labeled Cluster computing settings. Since our cluster uses SLURM® software as its scheduler, we set the Scheduler type to “SLURM”. The following options are SLURM®-specific:
On the machine used in this example, we have two queues: “cluster” for jobs of up to 10 physical compute nodes with 64 GB of RAM each and “fatnode” for a single node with 256 GB. Every cluster will have different queues, so ask your cluster administrator what queues to use.
The next field is labeled “Directory”. This is where the solved COMSOL Multiphysics files go on a local computer when the job is finished. This is also where the COMSOL® software will store any intermediate, status, and log files.
The next three fields specify locations on the cluster. Notice that Directory was a Windows® path (since we are working on a Windows® computer here), but these are Linux® paths (since our cluster uses Linux®). Make sure that the kind of path matches the operating system on the local and remote side!
The Server Directory specifies where files should be stored when using cluster computing from a COMSOL Multiphysics session in client-server mode. When executing cluster computing from a local machine, this setting is not used, so we leave it blank. We do need the external COMSOL batch directory, however. This is where model files, status files, and log files should be kept on the cluster during the simulation. For these paths, be sure to choose a directory that already exists where you have write permissions; for example, some place in your home directory. (See this previous blog post on using client-server mode for more details.)
The COMSOL installation directory is self-explanatory and should contain the folders bin, applications, and so on. This is usually something like “/usr/local/comsol/v53a/multiphysics/” by default, but it obviously depends on where COMSOL Multiphysics is installed on the cluster.
Remote connection settings.
The next important section is the Remote and Cloud Access tab. This is where we specify how to establish the connection between the local computer and remote cluster.
To connect from a Windows® workstation to a Linux® cluster, we need the third-party program PuTTY to act as the SSH client for the COMSOL® software. Make sure to have PuTTY installed and that you can connect to your cluster with it. Also, make sure that you set up password-free authentication with a public-private key pair. There are many tutorials online on how to do this and your cluster administrator can help you. When this is done, enter the installation directory of PuTTY as the SSH directory and your private key file from the password-free authentication in the SSH key file. Set the SSH user to your login name on the cluster.
While SSH is used to log in to the cluster and run commands, SCP is used for file transfer, for example, when transferring model files to or from the cluster. PuTTY uses the same settings for SCP and SSH, so just copy the settings from SSH.
Lastly, enter the address of the cluster under Remote hosts. This may be a host name or an IP address. Remember to also set the Remote OS to the correct operating system on the cluster.
When you are done, you can click the Save icon at the top of the Settings window to start with these settings next time you want to run a remote cluster job.
Another way to test whether your cluster settings work is to use the Cluster Setup Validation app, available as of COMSOL Multiphysics version 5.3a.
The settings that change every time you run a study include the model name and the number of physical nodes to use. When you click to run the study, COMSOL Multiphysics begins the process of submitting the job to the cluster. The first step is invisible and involves running SCP to copy the model file to the cluster. The second step is starting the simulation by submitting a job to the scheduler. Once this stage starts, the External Process window automatically appears and informs you of the progress of your simulation on the cluster. During this stage, the COMSOL Desktop® is locked and the software is busy tracking the remote job.
Tracking the progress of the remote job in the External Process window from scheduling the job (top) to Done (bottom).
This process is very similar to how the Batch Sweep node works. In fact, you may recognize the External Process window from using the batch sweep functionality. Just like with a batch sweep, we can regain control of the GUI by clicking the Detach Job button below the External Process window to detach the GUI from the remote job. We can later reattach to the same job by clicking the Attach Job button, which replaces the Detach Job button while we are detached.
Normally, running COMSOL Multiphysics on two machines simultaneously requires two license seats, but you can check the Use batch license option to detach from a remote job and keep editing locally with only one license seat. In fact, you can even submit multiple jobs to the cluster and run them simultaneously, as long as both jobs are just variations of the same model file; i.e., they only differ in their global parameter values. The only restriction is that your local username needs to be identical to the username on the remote cluster so the license manager can tell that the same person is using both licenses. Otherwise, an extra license seat will be consumed, even when the Use batch license option is enabled.
As soon as the simulation is done, you are prompted to open the resulting file:
Once the cluster job has finished, you are prompted to immediately open the solved file.
If you select No, you can still open the file later, because it will have already been downloaded and copied to the directory that was specified in the settings. Let’s have a look at these files:
Files created during the cluster job on the local side.
These files are created and updated as the simulation progresses. COMSOL Multiphysics periodically retrieves each file from the remote cluster to update the status in the Progress window and informs you as soon as the simulation is done. The same files are also present on the remote side:
Files created during the cluster job on the remote side. Note: Colors have been changed from the default color scheme in PuTTY to emphasize MPH-files.
Here is a rundown of the most relevant file types:
| File | Remote Side | Local Side |
| --- | --- | --- |
| backup*.mph | N/A | ✓ |
| *.mph | ✓ | ✓ |
| *.mph.log | ✓ | ✓ |
| *.mph.recovery | ✓ | ✓ |
| *.mph.status | ✓ | ✓ |
| *.mph.host | N/A | ✓ |
The busbar model, being so small, is not something that we would want to realistically run on a cluster. After using that example to test the functionality, we can open up any model file, add the Cluster Computing node (populated with the defaults we set before), change the number of nodes and filename, and click Compute. The Run remote options, scheduler type, and all of the associated settings don’t need to be changed again.
What does the COMSOL® software do when we run a model on multiple hosts? How is the work split up? Most algorithms in the software are parallelized, meaning the COMSOL Multiphysics processes on all hosts work together on the same computation. Distributing the work over multiple computers provides more computing resources and can increase performance for many problems.
However, it should be noted that the required communication between cluster nodes can produce a performance bottleneck. How fast the model will solve depends a lot on the model itself, the solver configuration, the quality of the network, and many other factors. You can find more information in this blog series on hybrid modeling.
Another reason to use the hardware power of a cluster is that the total memory that a simulation needs stays approximately constant, but there is more memory among all of the hosts, so the memory needed per host goes down. This allows us to run really large models that would not otherwise be possible to solve on a single computer. In practice, the total memory consumption of the problem goes up slightly, since the COMSOL Multiphysics processes need to track their own data as well as the data they receive from each other (usually much less). Also, the exact amount of memory a process will need is often not predictable, so adding more processes can increase the risk that a single physical node will run out of memory and abort the simulation.
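As a toy illustration of this memory behavior, the sketch below models the per-host memory need with an assumed bookkeeping overhead. The formula and the numbers are illustrative only, not a COMSOL prediction:

```python
# Toy memory model for a distributed simulation: the main data is divided
# among the hosts, while each extra process adds a small amount of
# duplicated bookkeeping data (the overhead fraction is an assumption).
def memory_per_host(total_mem_gb, n_hosts, overhead_frac=0.1):
    """Estimated memory per host when distributing a simulation that
    needs total_mem_gb on a single machine."""
    total = total_mem_gb * (1 + overhead_frac * (n_hosts - 1) / n_hosts)
    return total / n_hosts

one_host = memory_per_host(100.0, 1)    # 100 GB on a single machine
four_hosts = memory_per_host(100.0, 4)  # total grows slightly, per host shrinks
```

The qualitative behavior matches the text: the combined memory grows a little with the number of processes, but the memory needed on each individual host drops sharply.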
A much easier case is running a distributed parametric sweep. We can speed up the computation by using multiple COMSOL Multiphysics processes and having each work on a different parameter value. We call this type of problem “embarrassingly parallel”, since the nodes do not need to exchange information across the network at all while solving. In this case, if the number of physical nodes is doubled, then ideally the simulation time will be cut in half. The actual speedup is typically not quite this good, as it takes some time to send the model to each node and additional time to copy the results back.
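The following toy model sketches why the speedup of a distributed sweep stays below the ideal value. The fixed per-node overhead for transferring model files and results is an assumption for illustration:

```python
# Toy model of a distributed parametric sweep: parameters are solved in
# waves of n_nodes at a time, plus a fixed file-transfer overhead per node.
import math

def sweep_time(n_params, t_solve_one, n_nodes, t_overhead_per_node):
    """Total wall-clock time for a sweep of n_params parameter values."""
    waves = math.ceil(n_params / n_nodes)  # values solved simultaneously
    return waves * t_solve_one + n_nodes * t_overhead_per_node

serial = sweep_time(20, 60.0, 1, 2.0)     # 20 solves back to back
parallel = sweep_time(20, 60.0, 10, 2.0)  # two waves on ten nodes
speedup = serial / parallel               # below the ideal factor of 10
```

With these assumed numbers, ten nodes give a speedup somewhat below 10, reflecting the time spent copying the model to each node and the results back.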
To run a distributed parametric sweep, we need to activate the Distribute parametric sweep option at the bottom of the settings for the parametric sweep. Otherwise, the simulation will run one parameter at a time using all of the cluster nodes, with the parallelization performed on the level of the solver, which is much less efficient.
If you run an auxiliary sweep, you can also check the Distribute parametric solver option in the study step, for example, to run a frequency sweep over many frequencies in parallel using multiple processes on potentially many physical nodes. Note that if you use a continuation method, or if individual simulations depend on each other, then this method of distributing the parameters does not work.
Note: Do not use the Distribute parametric sweep option in the Cluster Computing node itself, as it has been deprecated. It is better to specify this directly in the settings for the parametric sweep.
Activate the Distribute parametric sweep option to run each set of parameters on a different node in parallel.
To run a sweep in parallel, we can also use the Cluster Sweep node, which combines the features of the Batch Sweep node with the ability of the Cluster Computing node to run jobs remotely. You can say that a cluster sweep is the remote version of the batch sweep, just like the Cluster Computing node is the remote version of the Batch node. We will discuss cluster sweeps in more detail in a future blog post.
The most important difference to remember is that the Cluster Computing node submits one job for the entire study (even if it contains a sweep), while the Cluster Sweep and Batch Sweep nodes create one job for each set of parameter values.
All of what is covered in this blog post is also available from simulation apps that are run from either COMSOL Multiphysics or COMSOL Server™. An app simply inherits the cluster settings from the model on which it is based.
When running apps from COMSOL Server™, you get access to cluster preferences in the administration web page of COMSOL Server™. You can let your app use these preferences to have the cluster settings hardwired and customized for a particular app. If you wish, you can design your apps so that the user of the app gets access to one or more of the lowlevel cluster settings. For example, in your app’s user interface, you can design a menu or list where users can select between different queues, such as the “cluster” or “fatnode” options mentioned earlier.
Whether you are using a university cluster, a virtual cloud environment, or your own hardware, the Cluster Computing node enables you to easily run your simulations remotely. You don’t usually need an expensive setup for this purpose. In fact, sometimes all you need is a Beowulf cluster for running parametric sweeps while you take care of other tasks locally.
Cluster computing is a powerful tool to speed up your simulations, study detailed and realistic devices, and ultimately help you with your research and development goals.
SLURM is a registered trademark of SchedMD LLC.
Linux is a registered trademark of Linus Torvalds in the U.S. and other countries.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Topology optimization helps engineers design applications in an optimized manner with respect to certain a priori objectives. Mainly used in structural mechanics, topology optimization is also applied in thermal, electromagnetics, and acoustics applications. One physics area that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses in microacoustics topology optimization.
A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.
The governing equation is the standard wave equation, with material parameters given in terms of the density ρ and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via an interpolation variable that ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.
Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.
Using this approach will work for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.
For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.
Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.
An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.
The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.
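The thickness of these boundary layers can be estimated from the standard textbook expressions for the viscous and thermal penetration depths. The snippet below uses assumed property values for air at roughly room temperature:

```python
# Viscous and thermal boundary layer thicknesses in air, using the
# standard expressions delta_v = sqrt(2*mu/(rho*omega)) and
# delta_t = sqrt(2*k/(rho*Cp*omega)). Property values are assumptions
# for air at about 20 degC.
import math

mu, rho = 1.81e-5, 1.2   # dynamic viscosity (Pa*s), density (kg/m^3)
k, Cp = 0.026, 1005.0    # thermal conductivity (W/(m*K)), heat capacity (J/(kg*K))

def delta_v(f):
    """Viscous boundary layer thickness (m) at frequency f (Hz)."""
    omega = 2 * math.pi * f
    return math.sqrt(2 * mu / (rho * omega))

def delta_t(f):
    """Thermal boundary layer thickness (m) at frequency f (Hz)."""
    omega = 2 * math.pi * f
    return math.sqrt(2 * k / (rho * Cp * omega))

dv_100 = delta_v(100.0)  # on the order of 0.2 mm at 100 Hz
```

At 100 Hz the viscous layer is roughly a fifth of a millimeter thick, which is why these losses dominate in the submillimeter channels of hearing aids and MEMS microphones.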
Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent background conditions. These equations are implemented in the Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization, where certain assumptions can be used. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as

\mathbf{v} = -\frac{\Psi_v}{i\omega\rho_0}\nabla p

where the viscous field \Psi_v is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity in the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between temperature variation and acoustic pressure can be written in a general form (Ref. 1) as

T = \frac{\Psi_h}{\rho_0 C_p}\, p

where the thermal field \Psi_h is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.
For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no oneequation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.
For simplicity, we look only at wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which may be familiar to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as
(1)
where the operator is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.
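For one of those simple geometries, a narrow slit between two parallel plates, the classic low reduced frequency result gives the viscous field in closed form as 1 − cosh(k_v·y)/cosh(k_v·h/2), with a complex wavenumber k_v = sqrt(iω/ν). The following sketch evaluates that profile; the air properties, frequency, and slit height are my own illustrative choices, not values from this post:

```python
import cmath
import math

mu, rho = 1.81e-5, 1.204           # air properties (assumed standard values)
nu = mu / rho                      # kinematic viscosity [m^2/s]
f = 10e3                           # frequency [Hz] (illustrative)
omega = 2 * math.pi * f
h = 1e-3                           # slit height [m] (illustrative)

k_v = cmath.sqrt(1j * omega / nu)  # complex viscous wavenumber

def psi_v(y):
    """Low reduced frequency viscous field across a slit of height h,
    with y = 0 at the centerline and the walls at y = +/- h/2."""
    return 1 - cmath.cosh(k_v * y) / cmath.cosh(k_v * h / 2)

print(abs(psi_v(h / 2)))  # -> 0 at the wall (no slip)
print(abs(psi_v(0.0)))    # -> close to 1 in the bulk at this frequency
```

The field is exactly zero on the walls and approaches one in the bulk as soon as the boundary layer is thin compared to the slit, which is precisely the behavior the interpolation scheme below needs to reproduce.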
In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for thermoviscous acoustic topology optimization, I came up with a heuristic approach where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are
and
These boundary conditions give us insight into how to perform the optimization procedure, since an air–solid interface could be represented by the former boundary condition and an air–air interface by the latter. We write the governing equation in a more general manner:
We already know that for air domains, (a_{v},f_{v}) = (1,1), since that gives us the original equation (1). If we instead set a_{v} to a large value so that the gradient term becomes insignificant, and set f_{v} to zero, we get
This corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid–air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, (a_{v},f_{v}) should have the values (“large”,0). Thus, we have established our interpolation extremes:
and
I carried out a comparison between the explicit boundary conditions and the interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas in the adjacent domains on the right, the suggested values of a_{v} and f_{v} are input.
Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.
The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.
Figure 4: The resulting field with contours for the setup in Figure 3.
The actual interpolation between the extremes is done via, for example, SIMP or RAMP schemes (Ref. 2), as in standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure variable via equations. With this, the world’s first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.
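The post does not give concrete numbers for the interpolation, so here is a minimal sketch of what a RAMP-based interpolation between the two extremes could look like. The penalty factor q and the “large” value for a_{v} in the solid limit are my own illustrative assumptions:

```python
def ramp(gamma, q=8.0):
    """RAMP interpolation factor: maps 0 -> 0 and 1 -> 1, with a convex
    shape for q > 0 that penalizes intermediate (gray) design values."""
    return gamma / (1.0 + q * (1.0 - gamma))

def interpolate(gamma, a_air=1.0, a_solid=1.0e6, f_air=1.0, f_solid=0.0):
    """Interpolate (a_v, f_v) between the air extreme (gamma = 0, giving
    (1, 1)) and the solid extreme (gamma = 1, giving ("large", 0))."""
    w = ramp(gamma)
    a_v = a_air + w * (a_solid - a_air)
    f_v = f_air + w * (f_solid - f_air)
    return a_v, f_v

for g in (0.0, 0.5, 1.0):
    print(g, interpolate(g))
```

At gamma = 0 this recovers the original governing equation for air, and at gamma = 1 the gradient term is drowned out and the source term vanishes, mimicking the no-slip wall, exactly as described above.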
Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonal cross section has a certain acoustic loss due to viscosity effects. Each side of the hexagon is approximately 1.1 mm long, which gives an area equivalent to that of a circle with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek an optimal topology that gives a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:
Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.
Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.
The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.
Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.
For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.
This novel topology optimization strategy can be expanded to a more general 1D method, where pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics to focus on improving topology optimization, in both universities and industry. I hope to see many advances in this area in the future.
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN ReSound A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN ReSound as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Born in 1707 in Basel, Switzerland, Leonhard Euler (pronounced “oiler”) was a prolific mathematician who published more than 800 articles during his lifetime. He studied under the famous Johann Bernoulli and received his master’s degree in philosophy from the University of Basel. Before moving to St. Petersburg, Russia, to work at the university, Euler submitted his first paper to the Paris Academy of Sciences, coming in second place at only 19 years old.
A portrait of Leonhard Euler. Image in the public domain, via Wikimedia Commons.
Euler quickly rose through the academic ranks and in 1733 succeeded Bernoulli as the chair of mathematics in St. Petersburg. Euler moved to Berlin in 1741 at the invitation of King Frederick II. In his 25 years there, he wrote around 380 articles and the first volume of his seminal book Introductio in Analysin Infinitorum, which formally defined functions for the first time; introduced the f(x) notation; popularized the e and π notation; and established the critical formula e^(ix) = cos(x) + i·sin(x).
Joseph-Louis Lagrange (pronounced “luh-gronj”) was born Giuseppe Lodovico Lagrangia in Turin. Today, this city is the capital of the region of Piedmont in Italy, but when Lagrange was born in 1736, it was ruled by the Duke of Savoy as part of the Kingdom of Sardinia. Lagrange developed an interest in mathematics and, after working independently on novel topics, began corresponding with Euler, whom he succeeded when Euler left Berlin.
A portrait of Joseph-Louis Lagrange. Image in the public domain, via Wikimedia Commons.
In Berlin, Lagrange developed most of the mathematics for which he is famous today. He played an important role in the development of variational calculus and came up with the Lagrangian approach to mechanics. Although Lagrangian mechanics makes the same predictions as Newton’s laws of motion, the Lagrangian functional introduced by Lagrange allows the classical mechanics of many problems to be described in a mathematically more straightforward and insightful manner than in Newtonian mechanics. Lagrange also developed the method of Lagrange multipliers, which allows constraints on systems of equations to be introduced easily in a variational approach.
The mathematical formulations of Euler and Lagrange are fundamental to the finite element method, which is used to solve equations in COMSOL Multiphysics.
In the Eulerian method, the dynamics of a system are considered from the viewpoint of an observer measuring the system’s evolution with respect to a fixed system of coordinates. This coordinate system is called the spatial frame in COMSOL Multiphysics. It could be understood to correspond to the laboratory frame in physical analysis, since the system of coordinates is oriented according to a fixed set of axes without any reference to the orientation of the components of the physical system itself.
The figure below illustrates a thin plate of material whose structural mechanics are modeled in a 2D plane. The plate is fixed to a rigid wall at the left-hand side and is deformed under its own weight, as gravity acts downward. With the results plotted in the spatial frame, we see the deformation of the object, as we would expect to observe in the laboratory.
A thin plate fixed to the gray block at the left deforms under its own weight, as viewed in the spatial (lab) frame. The deflection at the tip is about 5 mm for the given mechanical properties.
Formulating physical equations seems very natural in the Eulerian method. Indeed, this is the common formulation for problems such as electromagnetics and fluid physics, in which the field variables are expressed as functions of the fixed coordinates in the spatial frame.
For mechanical problems, though, the Lagrangian method offers a helpful alternative. In the Lagrangian method, the mechanical equations are written with reference to small individual volumes of the material, which will move within an object as it displaces or deforms dynamically. To put it another way, the object itself always appears undeformed from the point of view of the Lagrangian coordinate system, since the latter stays attached to the deforming object and moves with it, but external forces in the surroundings appear to change their orientation from the deforming object’s perspective. The corresponding coordinate system, which moves along with the deforming object, is called the material frame in COMSOL Multiphysics.
A point within the object, as measured in the spatial frame, is displaced from the position of the same point as expressed in the material frame by the mechanical displacement of that point. In the image below, we focus our view on the tip of the deforming plate in the example above and animate its deformation as the density of the object increases so that the weight increases too. As you can see, the material frame coordinate system (red grid and arrows) deforms together with the object, as the object’s dimensions in the spatial frame change. This means that anisotropic material properties — such as mechanical properties of composite materials — can be expressed conveniently in the material frame.
Zoomed-in view of the tip of a thin plate deforming under its own weight, as its density is increased. The red grid denotes the material frame coordinates, tied to the object, as viewed in the spatial (lab) frame. The red and green arrows show the x- and y-coordinate orientations of the material frame, as viewed in the spatial frame.
In the limit of very small strains for this type of mechanical problem, the spatial and material frames are nearly coincident, because the mechanical displacement is small compared to the object’s size. In this case, it is common to use the “engineering strain” to define the elastic stress-strain relation for the object, and the resulting stress-strain equations are linear. As the mechanical displacement increases, though, the linear approximation used to evaluate the engineering strain becomes increasingly inaccurate, and the exact Green–Lagrange strain is required. In COMSOL Multiphysics, the term “geometric nonlinearity” means that the Green–Lagrange strain is used.
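For a uniaxial stretch λ = L/L₀, the two strain measures are ε = λ − 1 (engineering) and E = (λ² − 1)/2 (Green–Lagrange), and a quick comparison shows how they diverge as the deformation grows. This is a standard result, sketched here for a few stretch values of my own choosing:

```python
def engineering_strain(stretch):
    """Engineering strain for a uniaxial stretch lambda = L/L0."""
    return stretch - 1.0

def green_lagrange_strain(stretch):
    """Green-Lagrange strain E = (lambda^2 - 1)/2 for the same stretch."""
    return 0.5 * (stretch**2 - 1.0)

for lam in (1.001, 1.01, 1.1, 1.5):
    e = engineering_strain(lam)
    E = green_lagrange_strain(lam)
    print(f"lambda = {lam}: eng = {e:.6f}, GL = {E:.6f}, "
          f"rel. diff = {(E - e) / e:.1%}")
```

At 0.1% stretch the two agree to within 0.05%, while at 50% stretch they differ by 25%, which is why geometric nonlinearity matters once displacements stop being small.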
For further details on the mathematics, see my colleague Henrik Sönnerlind’s blog post on geometric nonlinearity.
Geometric nonlinearity is handled in COMSOL Multiphysics by allowing the spatial frame to be separated from the material frame, according to a frame transformation due to the computed mechanical displacement. It remains convenient to access the material frame to express properties such as anisotropic mechanical material properties, since these properties will usually remain aligned with the material frame coordinates, even as the object deforms.
By contrast, external forces such as gravity have a fixed orientation in the spatial frame. From the perspective of the material frame, external forces like gravity change direction as the object deforms. The image below shows the tip of the thin plate as above, but here, the displacement magnitude is plotted with colors. Arrows are used to illustrate the force due to gravity, as expressed in the material frame coordinates. Since the material frame coordinates remain fixed with respect to the object, the dimensions of the object appear not to change. However, the displacement magnitude increases with the object’s weight and the gravity force increasingly changes direction with respect to the deformed material in conditions of greater deformation.
Zoomedin view of the tip of a thin plate deforming under its own weight as its density increases. The plot is in the material frame as used for the Lagrangian formulation, so the deformation is not apparent, although displacement increases. The red arrows indicate the apparent direction of gravity (which is constant in the spatial frame) as perceived from the material frame of reference within the deforming object.
Neither the Lagrangian nor Eulerian formulation is more “physical” or “correct” than the other. They are simply different mathematical approaches to describing the same phenomena and equations. Through coordinate transformation, we can always transform the physical equations for any phenomenon from the material frame to the spatial frame or vice versa. From the perspective of interpretation and implementation, though, each approach has certain advantages and common applications. Some of these are summarized in the table below:
Eulerian Method
Strengths: Field variables are expressed in a fixed (spatial) coordinate system, which is natural for physics formulated on a fixed domain.
Common Applications: Fluid flow, electromagnetics.

Lagrangian Method
Strengths: The coordinate system follows the material, so the object appears undeformed and anisotropic material properties are easily expressed.
Common Applications: Structural mechanics.
What about multiphysics problems, such as fluid–structure interaction (FSI) or geometrically nonlinear electromechanics? In these cases, one physical equation might be formulated most naturally with the Eulerian method, while another might be better expressed with the Lagrangian method. This is where the ALE method comes in. This method solves the equations on a third coordinate system, which is not required to match either the spatial frame or the material frame coordinate systems.
The third coordinate system is called the mesh frame in COMSOL Multiphysics. There is one mathematical mapping between the spatial frame and the underlying mesh frame, and one between the material frame and the underlying mesh frame, so at all points in time, the equations formulated in the spatial and material frames can be transformed into the mesh frame to be solved.
In domains representing solids in a model, mechanical displacement is predicted using structural mechanics equations in the Lagrangian formulation. Here, the relation of the spatial and material frames is given by the mechanical displacement, as above. The ALE method adds more equations to allow the apparent positions and shapes of mesh elements in neighboring domains to displace in the spatial frame. That is in order to account for how mechanical deformation can change the shape of the boundaries of any domain where the physics are described in the Eulerian formulation. These additional equations are called a Moving Mesh or Deformed Geometry in COMSOL Multiphysics.
At boundaries between Lagrangian and Eulerian domains, a boundary condition for these additional equations requires that the displacement of the spatial frame (as defined through the moving mesh) for the Eulerian domain must match the mechanical displacement of the spatial frame away from the material frame in the Lagrangian domain. Even where no mechanical equations are solved, such that no Lagrangian method is used, the ALE method can still be used to express moving boundaries due to deposition or loss of material.
If you find the ALE method quite mathematical, that’s OK! It’s a difficult concept to follow in the abstract. To better understand the way the ALE method works, let’s take a look at an example within COMSOL Multiphysics.
The ALE method plays an important role in modeling FSI. In COMSOL Multiphysics, this method enables the automated bidirectional coupling of fluid flow and structural deformation, a capability demonstrated in our Micropump Mechanism tutorial model.
At the heart of this micropump mechanism are two cantilevers, which perform the same function as valves in conventional pumping devices. These cantilevers are flexible enough that the fluid flow causes them to deform. As fluid is alternately pumped into or out of the channel at the top, the force of the fluid flow causes the two cantilevers to deform so that fluid flows out to the right or in from the left.
The micropump mechanism. Pumping fluid into or out of the top tube produces opposite reactions in the two cantilevers, pushing fluid in or out of the chamber. Even though there is no timeaveraged net flow into the upper tube, there is a timeaveraged net movement of fluid from left to right.
The cantilevers deform enough that there is an appreciable change in the position of the boundary where the fluid and solid meet: a geometrically nonlinear case. The self-consistent handling of the fluid’s pressure on the solid and the solid’s force on the fluid, together with the deformation of the mesh, are handled automatically by the Fluid-Structure Interaction interface. The interface employs the ALE method to account for the change in shape in the solid and fluid regions.
For solids, the mechanical equations with geometric nonlinearity define the displacement of the spatial frame with respect to the material frame. In the fluid equations, it’s necessary to deform the mesh on which the equations are solved in order to express the displacement of the solid boundaries in the spatial frame where the fluid equations are formulated. The deformation at the boundaries is controlled by the mechanical displacement from the solution to the structural problem. Within the fluid, though, the exact position or orientation of mesh nodes isn’t important, as the equations are formulated in the fixed spatial frame. Instead, the deformation of the mesh is smoothed in order to ensure that the numerical problem remains stable with highquality mesh elements.
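The smoothing step described above can be illustrated with the simplest possible case: in one dimension, Laplace smoothing of the mesh displacement amounts to solving d²u/dx² = 0 with the boundary displacements prescribed, so the interior nodes end up linearly interpolating between the two ends. This is only a minimal sketch of the idea (COMSOL offers several smoothing types, such as Laplace, Winslow, and hyperelastic, and this toy solver stands in for none of them specifically):

```python
import numpy as np

def smooth_mesh_displacement(n_nodes, u_left, u_right):
    """Solve d^2u/dx^2 = 0 (1D Laplace smoothing) for the mesh node
    displacements, given prescribed displacements at the two ends."""
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    # Dirichlet conditions: the end nodes follow the prescribed motion
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = u_left, u_right
    # Interior nodes: discrete second derivative equals zero
    for i in range(1, n_nodes - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    return np.linalg.solve(A, b)

# A fixed far boundary (u = 0) and a structural boundary displaced by 0.2:
u = smooth_mesh_displacement(5, 0.0, 0.2)
print(u)  # the interior nodes share the displacement smoothly
```

In 2D and 3D the same idea distributes the boundary motion over the fluid mesh so that elements deform gradually instead of collapsing at the moving wall.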
To explain the ALE method for the FSI problem, we could paraphrase a common explanation for general relativity: forces due to fluid flow (Eulerian) tell the structure how to deform in the material frame (Lagrangian), while the structural deformation (Lagrangian) tells the mesh how to move in the spatial frame (Eulerian).
Top: The micropump’s operation, including pressure, flow, and cantilever deformation, as plotted in the spatial frame. Bottom: Mesh deformations calculated by the ALE method.
As of COMSOL Multiphysics version 5.3a, the Moving Mesh feature to define mesh deformation in this type of problem is located under Component > Definitions. This allows consistency in the definition of material and spatial frames between all physics included in a model, even if several physics interfaces are included. The screen capture below shows where these settings are located in the COMSOL Multiphysics Model Builder tree.
Screen capture showing Moving Mesh features under Component > Definitions, and physical coupling between two physics interfaces through Multiphysics > Fluid-Structure Interaction.
Turning to an electrochemical problem, the Copper Deposition in a Trench tutorial model shows that the ALE method can be vital for simulating electrodeposition problems. In this model, copper is deposited onto a circuit board that has a small “trench”. The deposited copper layer becomes thick compared to the overall size of the trench, so the size and orientation of the copper surface change appreciably as deposition proceeds. Since the rate of copper deposition at different points on this surface is nonuniform, the shape and movement of the boundary cannot be neglected.
A schematic of the physical problem being solved in the electrodeposition model.
To calculate the rate of deposition at a given point on the copper electrodeelectrolyte interface, we need the concentration of the species and the electrolyte potential of the solution adjacent to that point. As the deposition progresses and the boundary moves, the shape of the electrolyte volume has to change continuously. Similarly, the concentration and potential distributions on the altered shape must be recalculated.
The coupling of the deposition rate to the boundary motion rate and the calculation of the changing shape are accomplished with the ALE method and fully automated multiphysics couplings with the Tertiary Current Distribution and Deformed Geometry interfaces. Here, the Deformed Geometry displaces the copper surface in the spatial frame at a rate proportional to the local current density for electrodeposition, as computed from the electrochemical interface.
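The proportionality between current density and boundary velocity follows from Faraday’s law of electrolysis: the deposition velocity is v = i·M/(n·F·ρ). A quick estimate for copper, with an illustrative current density chosen by me (the tutorial’s actual operating point may differ):

```python
F = 96485.0      # Faraday constant [C/mol]
M_cu = 63.55e-3  # molar mass of copper [kg/mol]
rho_cu = 8960.0  # density of copper [kg/m^3]
n = 2            # electrons transferred per Cu(2+) ion

def deposition_velocity(i_loc):
    """Normal boundary growth velocity [m/s] from Faraday's law,
    for a local current density i_loc [A/m^2]."""
    return i_loc * M_cu / (n * F * rho_cu)

i_loc = 100.0  # illustrative local current density [A/m^2]
v = deposition_velocity(i_loc)
print(f"{v:.3e} m/s  (~{v * 3600 * 1e6:.1f} um/h)")
```

At 100 A/m² the surface advances by roughly 13 µm per hour, so a trench a few tens of micrometers deep changes shape appreciably over a realistic plating time, which is exactly why the moving boundary cannot be neglected.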
With this model, we can accurately account for the deposition process in order to optimize its parameters. We can also experiment with different applied potentials and deposition surface geometries to improve the uniformity of the deposition, which produces a more efficient process and a higherquality end product.
Animations showing the evolution of the deposition process in time. It is clear that the deposition happens unevenly, resulting in a pinching of the trench opening at its top.
Thermal ablation, discussed in this previous blog post, involves a very high temperature applied to an object, causing the surface to melt and vaporize. Examples of thermal ablation include the removal of material by lasers — such as in the etching process, laser drilling, or laser eye surgery — and a spacecraft’s heat shield as it reenters the atmosphere.
Animation showing the effect of thermal ablation on a material.
Since we expect that an object’s shape will change when some of its material is removed, deforming meshes are clearly a key part of thermal ablation simulation. What we need to know is how the shape of the object will change. This depends on how we balance the applied heat with heat lost to ablation and heat dissipation throughout the structure by mechanisms such as conduction.
To obtain this information, we can predict the temperature profile as a function of space and time by solving the heat transfer equations using the Heat Transfer interface. Because the mass and shape of the object are changing, the Heat Transfer interface is coupled to a Deformed Geometry interface, using the ALE method to displace the boundary according to the rate of ablation. The Heat Transfer equations predict the temperature distribution in the object as its shape evolves.
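The boundary velocity fed to the Deformed Geometry interface can be estimated from a lumped energy balance: the net heat flux into the surface goes into heating the material to the ablation temperature and supplying the ablation (latent) heat. This is a quasi-steady sketch with illustrative material numbers of my own, not values from the blog post it references:

```python
def recession_rate(q_net, rho, cp, dT, L):
    """Quasi-steady surface recession rate [m/s] from an energy balance:
    the net heat flux q_net [W/m^2] heats material of density rho [kg/m^3]
    by dT [K] (at specific heat cp [J/(kg*K)]) and supplies the ablation
    heat L [J/kg]."""
    return q_net / (rho * (cp * dT + L))

# Illustrative numbers for a generic ablating material:
v = recession_rate(q_net=1e6, rho=2000.0, cp=1000.0, dT=2000.0, L=5e6)
print(f"{v * 1e3:.3f} mm/s")
```

In the full simulation, conduction into the bulk reduces q_net locally, which is why the coupled heat transfer solution, rather than this lumped estimate, determines the evolving shape.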
By performing these steps, we can attain accurate calculations for the thermal ablation process. Moreover, we can determine the final shape of the object after ablation is complete. This might enable us to check whether a laser weld will fall within acceptable tolerances or whether a spacecraft will survive an emergency landing.
The contributions of Leonhard Euler and JosephLouis Lagrange in the field of mathematics have paved the way for simulating a variety of systems involving multiphysics applications. The combination of their individual methods has led to the development of the ALE method, which can be used to predict physical behavior when objects deform or displace. By properly accounting for these movements, you can set up highly accurate models. Remember to thank Euler and Lagrange as you investigate these and other models that exploit the ALE method!
The ALE method is one of many builtin physics capabilities in the COMSOL Multiphysics® software. See more of them:
As children, many of us took eye examinations to determine if we had color vision deficiency (CVD). These tests can involve showing the subjects pseudoisochromatic plates, which are circles made up of dots of different colors and sizes. The ability to differentiate between the dots and see a number in the middle of the plate indicates that a person does not have CVD.
An illustrative example of a color vision test. The numbers included are 5 and 3.
However, for the 1 in 12 men and 1 in 200 women with CVD, these symbols are hidden in a colorful camouflage. This is because CVD makes it difficult to distinguish between particular colors.
In the scientific community, this poses a problem because color tables are often used to help visualize results and present data in a way that is meant to be easily understood. To do so, color tables use arrays of colors in a predefined order, with each color representing a different value. Some color tables, such as rainbow (the default color table in many different software), use a wide range of colors. For engineers with CVD, the colors used in these tables can cause data misinterpretations, possibly obscuring key results and findings.
For example, take a look at the images below. These show that results visualized with a rainbow color table (left) can appear completely different to a person with CVD (middle). In this case, which simulates red–green CVD, or deuteranopia, the bright red from the rainbow color table would be interpreted as a darker gray-yellow by engineers with CVD, which could result in data misinterpretation. One solution to this problem would be to generate a new color table so that the results can be interpreted correctly by engineers with CVD (right).
Different visualizations of a mixer model.
Rainbow color tables are not only problematic for people with CVD; they pose issues for people without CVD, too. Jamie Nuñez, a member of a research team from the Pacific Northwest National Laboratory (PNNL), explained that rainbow color tables introduce artifacts due to their uneven change between colors and their lack of a ramp in lightness (i.e., lightness steadily increasing from one end of the color table to the other). This can make regions appear significant (or insignificant) when the opposite is true.
Nuñez also noted that although we can compare different regions by including a color table next to an image, unnecessarily complex color tables just slow down interpretation and can lead to incorrect conclusions.
In addition, despite the prevalence of tests for CVD, it is possible to have a CVD and be unaware of it. This happens because people learn from an early age what colors certain objects are supposed to be. And regardless of whether someone actually sees the same color as another person, they will call it the same thing. This means that, whether we know it or not, we may be incorrectly perceiving the colors used in the results of simulation and engineering projects.
Due to these issues, Nuñez and her fellow PNNL researchers Dr. Ryan Renslow and Dr. Christopher Anderton came to a realization: There has long been a need to move away from rainbow color tables. Therefore, the team decided to make an optimized color table that could be used throughout the scientific community.
To create an optimized color table for engineers with CVD, one option is to use grayscale. However, presenting results as grayscale images comes with its own set of issues. Namely, people have more difficulty distinguishing between different shades of gray and are less able to observe subtle changes when these tables are used.
Instead, the PNNL team created Cividis, a color table that is optimized with CVDs in mind.
The Cividis color table helps people with CVD accurately interpret simulation results, such as the sound pressure level of this loudspeaker.
For the PNNL team, the underlying goal of Cividis was to create a color table that optimizes the viewing of scalar data for people with and without CVD. In essence, the color table should have the highest accurate representation of data for the maximum number of people possible.
Achieving these goals wasn’t easy, as the team had to develop code to optimize color tables, which Nuñez mentioned was their greatest challenge. While the team knew its goals and the steps involved, actually understanding how to accomplish them without having to manually tweak anything was challenging. In addition, gathering and interpreting relevant color theory information took quite a bit of work.
In the end, the team overcame these challenges and created Cividis by optimizing the Viridis color table, which is seen as the current gold standard of color tables, but is not optimized for those with CVD. Cividis includes various shades of blue and yellow to create a userfriendly color table for people with and without CVD. What’s more, the PNNL team decided to share Cividis with COMSOL so that users of the COMSOL Multiphysics® software can easily access it for their own simulations. This color table is now available as of version 5.3a of COMSOL Multiphysics.
The temperature profile in a heat sink (right color legend) and in the air around the heat sink (left color legend). This model was visualized using a rainbow color table.
This image depicts how people with deuteranopia perceive the rainbow color table results.
In this version of the model, the results are visualized using the Cividis color table. By swapping out the rainbow color table (top) for Cividis, engineers with CVD can more easily analyze the temperature field and avoid potential data interpretation artifacts.
According to the PNNL team, Cividis offers three primary advantages to users of COMSOL Multiphysics.
First of all, it provides a perceptually uniform change in color and constant ramp in lightness. This means that the colors in Cividis change smoothly over the color table, with brighter colors representing higher values and vice versa. This is beneficial, Nuñez explains, as Cividis is very intuitive for how the different colors within the color table compare to each other. That makes it easy for others to understand how different values in an image compare and allows truly significant regions to stand out.
Additionally, the wide range of colors used in Cividis prevents the issues that come with using a grayscale. Finally, although Cividis has been specifically tested for use with redgreen color deficiencies, the most common CVD, it can be used by engineers with and without CVD. This is because Cividis looks the same to engineers with normal color vision, a deuteranomaly, or deuteranopia.
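The lightness-ramp property is easy to check programmatically for any color table. The sketch below uses a small, hypothetical blue-to-yellow table (standing in for a Cividis-like table; these RGB values are my own, not the actual Cividis data) and the Rec. 601 luma formula as a rough proxy for perceived lightness:

```python
# A hypothetical 5-entry blue-to-yellow table (RGB components in 0..1),
# standing in for a Cividis-like color table:
table = [
    (0.00, 0.13, 0.30),
    (0.21, 0.27, 0.42),
    (0.42, 0.43, 0.44),
    (0.69, 0.61, 0.33),
    (0.99, 0.85, 0.21),
]

def luma(rgb):
    """Rec. 601 luma, a rough proxy for perceived lightness."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def has_lightness_ramp(colors):
    """True if lightness increases monotonically along the table."""
    levels = [luma(c) for c in colors]
    return all(b > a for a, b in zip(levels, levels[1:]))

print(has_lightness_ramp(table))  # -> True for this table
```

A rainbow table fails this test because its lightness rises and falls several times along the scale, whereas a Cividis-style table passes by design (the real Cividis optimization works in a perceptual color space such as CAM02-UCS rather than with luma).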
Using Cividis to visualize a Kármán vortex street behind a sphere subjected to a flow.
While we at COMSOL find the Cividis color scheme to be aesthetically pleasing (perhaps reminiscent of the colors of a moonlit sky), some testers found the color table to be unattractive due to a lack of color changes. To tackle this issue, the PNNL team plans to use the tools they created to optimize Cividis and create another optimized color table that cycles through more colors while remaining optimal for varying severities of deuteranomaly…so, stay tuned!
Moving forward, the team feels that in order for the scientific community as a whole to shift toward the use of optimized color tables such as Cividis, it needs to be easy to understand their importance and add them to software. This is why the PNNL team shared Cividis with COMSOL. They also plan on making all of their materials free and widely available. Nuñez says that their goal was to make this color table — along with the code used to generate it and the paper written discussing its design — available to everyone to help aid in the problem the team had identified.
Nuñez, Renslow, and Anderton hope that their work will increase the awareness and availability of CVDfriendly color tables, helping around 600 million people worldwide with CVD.
With equation-based modeling, part of the core functionality of COMSOL Multiphysics, you can create your own model definitions based on mathematical equations and directly input them into the software’s graphical user interface (GUI).
These abilities give you complete control over your model, so you can tailor it to your exact specifications and add complexity as needed. To provide this flexibility, COMSOL Multiphysics uses a builtin interpreter that interprets equations, expressions, and other mathematical descriptions before producing a model. In addition, you can use tools like the Physics Builder to create your own physics interfaces, or the Application Builder to create entire new user interfaces.
Example of entering a custom partial differential equation into the COMSOL Multiphysics GUI.
Using this functionality, you can work with partial differential equations (PDEs), ordinary differential equations (ODEs), and algebraic equations.
There is no limit to how creative you can be when setting up and solving your models with equationbased modeling, which expands what you can achieve with simulation. To show this functionality in action, let’s take a look at three examples…
In 1895, the Korteweg–de Vries (KdV) equation was derived as a means to model water waves. Since the equation doesn’t introduce dissipation, the waves travel seemingly forever. These waves are now called solitons, which are seen as single “humps” that can travel over long distances without altering their shape or speed.
Today, engineers use the KdV equation to understand light waves. As a result, one of the main modern applications of solitons is in optical fibers.
To solve the KdV equation in COMSOL Multiphysics, users can add PDEs and ODEs into the software interface via mathematical expressions and coefficient matching. It’s also possible to easily define dependent variables and identify coefficients via the General Form PDE interface.
With this setup, users are able to model an initial pulse in an optical fiber and the resulting waves or solitons. According to the KdV equation, the speed of the pulse should determine both its amplitude and width, which can be observed via simulation. In addition, the simulation reveals that, just like with linear waves, solitons can collide and reappear while maintaining their shape. This counterintuitive finding would be challenging to observe without simulation.
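The amplitude-width-speed relationship mentioned above can be checked directly against the analytic single-soliton solution of the KdV equation (this is a quick numerical sketch, separate from the COMSOL model):

```python
import numpy as np

def soliton(x, t, c):
    """Single-soliton solution of the KdV equation u_t + 6*u*u_x + u_xxx = 0:
    the pulse speed c sets both the amplitude (c/2) and the width (~1/sqrt(c))."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

x = np.linspace(-40.0, 40.0, 4001)
slow = soliton(x, 0.0, 1.0)   # speed c = 1
fast = soliton(x, 0.0, 4.0)   # speed c = 4

def fwhm(u):
    """Full width at half maximum, measured on the sampled grid."""
    above = x[u >= 0.5 * u.max()]
    return above[-1] - above[0]

print(fast.max() / slow.max())   # -> 4.0: 4x the speed gives 4x the amplitude
print(fwhm(slow) / fwhm(fast))   # ~2: the faster soliton is half as wide
```

A faster soliton is therefore both taller and narrower, which is exactly the behavior the simulation lets you observe as the pulses travel.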
If you want to learn more about this example, see the KdV equation model in the Application Gallery.
Simulation showing how solitons maintain an intact shape when colliding and reappearing.
Moving on to a medical example, let’s see how simulation can be used to understand the rhythmic patterns of contractions and dilations in a heart. The rhythmic contractions are triggered when the heart passes an ionic current through the muscle. During this process, ions flow through small pores that exist in an excitation (open) or rest (closed) state within the cellular membrane. As such, to gain a better understanding of heart patterns, the electrical activity in cardiac tissue needs to be examined.
Studying the electrical signals in a heart is not a simple process and involves modeling excitable media. To address this challenge, users can implement two sets of equations to describe various aspects of the electrical signal propagation. One such example is the Electrical Signals in a Heart model, provided through the courtesy of Dr. Christian Cherubini and Prof. Simonetta Filippi from the Campus Bio-Medico University of Rome in Italy. The equations used in this model, FitzHugh–Nagumo and complex Ginzburg–Landau, are included in the PDE interfaces available in COMSOL Multiphysics.
By using the FitzHugh–Nagumo equations to simulate excitable media, it is possible to create a simple physiological heart model with two variables: an activator (corresponding to the electric potential) and an inhibitor (the voltage-dependent probability that the membrane’s pores are open and can transmit ionic current). Using these equations and various parameters, users can visualize a reentrant wave that moves around the tissue without damping, which results in a characteristic spiral pattern. In the context of electrical signals, this pattern could generate effects similar to those of arrhythmia, a condition that disturbs the normal pulse of a heart.
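The activator–inhibitor dynamics can be sketched in a few lines. The following 0D sketch uses common textbook parameter values, not the values from the COMSOL model, and shows the characteristic sustained spiking of the activator:

```python
import numpy as np

# Illustrative textbook parameters, not taken from the COMSOL model
I_app, eps, beta, gamma = 0.5, 0.08, 0.7, 0.8

def step(u, v, dt=0.01):
    """One explicit Euler step of the FitzHugh-Nagumo ODEs: u is the
    activator (electric potential), v the inhibitor (recovery variable)."""
    du = u - u**3 / 3.0 - v + I_app
    dv = eps * (u + beta - gamma * v)
    return u + dt * du, v + dt * dv

u, v = -1.0, -0.5
trace = []
for _ in range(20000):          # integrate to t = 200
    u, v = step(u, v)
    trace.append(u)
trace = np.array(trace)

# With this drive, the system sits on a limit cycle: the activator
# repeatedly spikes instead of settling to a rest state
spikes = int(np.sum((trace[:-1] < 0.0) & (trace[1:] >= 0.0)))
print(spikes, trace.max())
```

The undamped, self-sustained oscillation seen here is the 0D analog of the reentrant spiral wave in the full 2D tissue model.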
Solving the FitzHugh–Nagumo equations at times of 120 (left) and 500 (right) seconds.
The complex Ginzburg–Landau equations help to model some parts of the transition from periodic oscillatory behavior to a chaotic state. During this transition, the amplitude of oscillations gradually increases and the periodicity decreases. These equations are used to study the dynamics of spiral waves in excitable media. The results show the diffusing species and the characteristic spiral patterns, which increase in complexity over time.
Solving the complex Ginzburg–Landau equations at times of 45 (left) and 75 (right) seconds.
Using both sets of equations enables the visualization of complicated realworld phenomena.
Lastly, let’s take a look at the Lorenz equations, which were developed to serve as a simple mathematical model for atmospheric convection. When using certain parameter values and initial conditions, a system of ODEs (a Lorenz system) has chaotic solutions. One such solution is a Lorenz attractor, which looks like a figure eight or butterfly when plotted in the phase space.
Example of the typical shape of a Lorenz attractor.
To solve the Lorenz attractor model, the Lorenz equations — a system of three coupled ODEs that contain three degrees of freedom — need to be added into the software. This is a straightforward process when using the Global ODEs and DAEs interface to define the Lorenz system.
Next, users can view an initial solution close to the attractor and study the growth of a very small perturbation to this initial data. The results (seen in the left image below) visualize how the difference between the original and perturbed problems increases over time. In addition, the simulation demonstrates that with the chosen parameter values, the Lorenz system behaves like a Lorenz attractor, with results showing the butterfly shape that these attractors are known for.
The differences between the unperturbed and perturbed solutions over time (left) and the normal pattern for a Lorenz attractor (right).
Watch a quick video introduction to COMSOL Multiphysics to learn more about the software’s key features. When you’re ready, request a software demonstration.
In the previous blog post, we discussed how to loft a geometry from imported curve data using the human head as the example of an irregular shape. Today, let’s use another irregular shape as our example: the Matterhorn. This mountain in the Alps, on the border between Switzerland and Italy, has a summit at 4478 meters (14,692 feet).
The east and north faces of the Matterhorn. Photo from camptocamp.org. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Geographical data is typically available as height (elevation) data. Today, we’ll discuss how to import elevation data to model the irregular shape of the Matterhorn’s surface. In short, the procedure includes importing the height data into a function, creating a parametric surface based on that function, and converting the surface into a solid geometry.
Now, let’s take a look at how to create a solid geometry of the Matterhorn in COMSOL Multiphysics.
We will use both a text file and a grayscale image of the mountain’s height to create a model geometry resembling the Matterhorn. The text file is imported in an Interpolation function, while the picture is imported in an Image function. We’ll also briefly cover importing a DEM file into an Elevation function, but this is not included in the example MPH-file that can be downloaded at the end of this blog post.
In the Image function, we specify the actual maximum and minimum values in the x and y directions, as the picture only contains information on the number of pixels and the color of each pixel. As the region is 2000 meters across, the minimum and maximum values of x and y are set to -1000 m and 1000 m, respectively. Note that if the functions are used in the definitions of the materials or physics, it is also possible to add the units of the arguments and the function.
The Settings windows of the Interpolation function (left) and the Image function (right). The size and position of the region are defined by the text file used in the Interpolation function, while the actual size of the region must be set for the Image function.
A plot showing the Interpolation function of the imported text file (left). Imported data: DHM25 © swisstopo. The color bar values represent the actual height of the mountain. The grayscale image (right) shows the height of the mountain. Note that the color bar is normalized to go from 0 to 1.
If the geographical data is in a DEM file, it is more suitable to create an Elevation (DEM) function. If the region specified in the DEM file isn’t rectangular, we can specify a height to use outside this region in the Replace missing data with edit field. In the example below, the height of the surface is set to 0 m.
An Elevation (DEM) function is used whenever a DEM file is imported. Enter a value in the Replace missing data with edit field if the region defined in the file doesn’t fill up a rectangular area. The default value is set to 0 m.
As the underlying data is now available in the model, let’s move on to creating the actual shape of the mountaintop. We use a Parametric Surface feature for this purpose.
The Parametric Surface feature is found under More Primitives in the Geometry ribbon.
The procedure is quite easy when a DEM file is imported, as we can just click the Create Surface button. This sets up a Parametric Surface feature with the maximum and minimum values in the parameter directions from the DEM file already filled in.
To create a parametric surface based on an imported DEM file, click the Create Surface button.
As the functions are slightly different, the expressions used will also differ. It is recommended to let the two parameters (s1 and s2) go from 0 to 1, so to get the actual dimensions in the final geometry, we need to reparameterize the x-, y-, and z-expressions.
For the Interpolation function, which is defined using the real dimensions of the Matterhorn, the expressions will look like those shown below. One way of obtaining the maximum and minimum values in the x and y direction is to first build the Parametric Surface without rescaling the expressions and then measure the x and y positions of the corners of the created surface. An alternative is to import the coordinate data into a spreadsheet editor, where it is possible to rearrange the coordinates in increasing order.
x: s1*(6.18e5-6.16e5) m
y: s2*(9.27e4-9.07e4) m
z: int1(s1*(6.18e5-6.16e5)+6.16e5, s2*(9.27e4-9.07e4)+9.07e4) m
The expressions in the Image function, in which the x- and y-values go from -1000 m to 1000 m and output values go from 0 to 1, will instead look like this:
x: s1*2000 m
y: s2*2000 m
z: (4478-3000)*im1(s1*2000-1000, s2*2000-1000) m
Note that we also need to scale the values in the z direction when using an Image function, as its output is normalized to go from 0 to 1. In the Settings windows shown below, you can see that the z position is set to 3000 to translate the surface to the correct position in space.
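Putting the image-based expressions together, the mapping from the normalized parameters (s1, s2) to physical coordinates can be sketched as follows. The `im1` function below is a hypothetical stand-in for the actual image data, and the additional vertical translation to the 3000–4478 m range is handled by the feature’s z position setting, as noted above:

```python
import math

def im1(x, y):
    """Hypothetical stand-in for the COMSOL Image function: maps (x, y) in
    [-1000, 1000]^2 to a normalized height in [0, 1]."""
    return math.exp(-(x**2 + y**2) / (2.0 * 500.0**2))

def surface_point(s1, s2):
    """Reparameterize (s1, s2) in [0, 1]^2 to the 2000 m x 2000 m region.
    The z expression rescales the normalized image value to meters and
    samples the image at the shifted coordinates, as in the expressions above."""
    x = s1 * 2000.0
    y = s2 * 2000.0
    z = (4478.0 - 3000.0) * im1(s1 * 2000.0 - 1000.0, s2 * 2000.0 - 1000.0)
    return x, y, z

print(surface_point(0.5, 0.5))  # center of the region: (1000.0, 1000.0, 1478.0)
```

The same pattern applies to the Interpolation function case; only the bounds and the scaling of the height value change.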
To get a better representation of the surface, the Maximum number of knots is increased to 300 (the default value is 20). This means that the rectangular area will be divided into, at maximum, 300 pieces in both parameter directions, creating patches. The more knots that are allowed, the more flexibility is given to adjust the patches to the given z expression, thereby improving the chances of achieving a tighter relative tolerance.
The algorithm starts by dividing the whole area into a smaller number of patches and then increases the number of patches where the error is large. By allowing a larger number of knots, the relative error between the patch placements and the actual data points is decreased. The algorithm tries to reach the set Relative tolerance (default value is 1.0E-6) by adding more knots.
When it isn’t possible to reach the tolerance, which can happen if the Maximum number of knots is set too low, a warning will be issued stating which tolerance has been used to build the surface. To remove the warning, copy the tolerance from the Warning node and paste it into the Parametric Surface feature and build it once more.
In the examples used here, the Relative tolerance is manually set to 0.002. If the number of knots is too large, it will result in a heavy geometry operation when creating the surface. There is a balance between using enough knots to get a small relative error and keeping the number of knots low enough so that the operation completes in a reasonable time. Sometimes, a smoother surface is a desired outcome, for instance, if the surface definition contains noise. In that case, reducing the Maximum number of knots will provide a surface that does not follow the noise too closely.
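The refine-where-the-error-is-largest idea can be illustrated in 1D. This is a toy sketch of the strategy (bisect the worst interval until the relative tolerance is met or the knot budget runs out), not COMSOL’s actual algorithm:

```python
import numpy as np

def adaptive_knots(f, a, b, rel_tol=0.002, max_knots=300):
    """Toy 1D adaptive knot placement: repeatedly bisect the interval with
    the largest midpoint error of a piecewise-linear fit, until rel_tol is
    met or the knot budget is spent. Returns the knots and the relative
    error actually achieved (like the tolerance reported in the warning)."""
    knots = [a, b]
    # Normalize errors by the magnitude of f, sampled on a coarse grid
    scale = max(abs(f(x)) for x in np.linspace(a, b, 201)) or 1.0
    while True:
        errs = [(abs(f(0.5 * (x0 + x1)) - 0.5 * (f(x0) + f(x1))) / scale,
                 0.5 * (x0 + x1))
                for x0, x1 in zip(knots[:-1], knots[1:])]
        worst, xm = max(errs)
        if worst <= rel_tol or len(knots) >= max_knots:
            return knots, worst
        knots = sorted(knots + [xm])

knots, err = adaptive_knots(np.sin, 0.0, np.pi)
print(len(knots), err)   # the tolerance is reached well under the knot budget
```

Raising `max_knots` lets the error drop further, while lowering it (as suggested for noisy data) forces a smoother fit, mirroring the trade-off described above.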
Settings windows of the two Parametric Surface features. The expressions have been reparameterized to keep the two parameters normalized. An increased Maximum number of knots is used to get a better representation of the surfaces.
Regardless of which method we have followed, we should now have a geometric surface object that represents the surface of the Matterhorn. However, in most simulations, a solid domain is needed. To create one, we add a Block with a size and location such that the Parametric Surface intersects the block.
The two geometry objects are then added to a Convert to Solid feature. The Convert to Solid operation creates a union of the block and surface, and in addition, it removes any parts of the surface that are sticking out of the block. In this case, where the block perfectly fits the outer edges of the surface, we could also use a Union operation and it will work just as well. Combining the surface and the block results in a solid object that consists of two domains separated by the surface of the Matterhorn.
Resulting geometries after building the Convert to Solid feature. The image to the left shows the irregular surface based on interpolated data from the text file, and the right-hand image shows the one based on the grayscale image.
The procedure described in this blog post can be used to create a sandwich-type geometry, where the imported surfaces separate different materials, for example, if you want to take a look at the stresses in layers of rock with different properties. In this case, follow the same procedure to generate each surface and include them all in the Convert to Solid feature.
We now have a geometry of the mountain that we can use for meshing and simulation. However, if we are only interested in an analysis of the rock, we can easily remove the upper domain that represents the air. A Delete Entities feature can be used to remove the air domain by setting the Geometric entity level to Domain and adding "domain 2" to the selection. Now, if we rotate the mountain, we can see the resemblance to the photo of the Matterhorn shown at the beginning of the post.
The final geometries created using a text file (left) and an image (right) as input. Imported data: DHM25 © swisstopo.
Even though the two mountaintop geometries are very much alike, they still differ from each other. If meshed with equal mesh sizes, they will give slightly different meshes. This is in part due to the fact that the Interpolation and Image functions give slightly different inputs to the Parametric Surface feature.
The Parametric Surface feature itself also makes an interpolation when it adapts the surface to the knots described above, so there are two interpolations involved here. However, as long as the size of the mesh is larger than the error of the two mentioned interpolations, it will be an adequate approximation of the imported data.
Irregular shapes can also come in other file formats. In a previous blog post, we discuss how to create geometries out of imported meshes. Next in this series, we will demonstrate how to interpolate material data on a regular-shaped domain.
Download the file used to create the example featured here via the button below.
A unit is an established quantity of a physical property; any magnitude of that property can be expressed as a multiple of the unit. Units for different quantities together form a unit system. Over time, many unit systems have lost their significance, leaving two dominant systems: metric and English.
The International System of Units (SI) is derived from the metric system and is now the global standard. The United States, Myanmar, and Liberia are the only countries that have not fully converted to the SI. Although the majority of countries shifted to SI units for professional use, many traditional units are still in use at the local level.
The other unit system is the Imperial system, or the United States Customary Unit System, which is the modern form of English unit systems. (Although different from the Imperial system, the United States Customary Unit System will be called the Imperial system in this blog post for generality.)
Why is the distinction between unit systems important? Let’s take a look at two historical disasters to find out.
In the 17th century, King Gustav II Adolf of Sweden aimed to make his country one of the foremost military powers in Europe. He contracted a Dutch shipbuilding company to build four powerful ships. One of them was the Vasa, slated to be the most powerful warship in the Baltic Sea for that era. In 1628, the Vasa set off on its first journey — and sank just 1300 meters into its voyage, about 1700 meters from the shipyard.
The Vasa Ship is on display at the Vasa Museum in Stockholm, Sweden.
According to research on the Vasa Ship from as recently as 2012, a main cause of the disaster was an asymmetric ship structure: The ship was heavier on the port side than the starboard side. Investigators found that four rulers used by Vasa workmen had different standards of measurement. Two rulers used the Swedish unit for feet, with 1 foot equal to 12 inches. The other two rulers used the Dutch unit for feet, in which 1 foot equals 11 inches. By failing to convert the units into a standard unit system, the ship builders caused asymmetry in the ship structure, which was one of the main reasons it sank.
Jumping forward in time, the Mars Climate Orbiter is a famous example of a space exploration failure caused by using two different unit systems. NASA launched the Mars Climate Orbiter in December 1998 to study the climate of Mars. However, in September 1999, contact was lost with the orbiter and the mission was declared a failure.
NASA investigated what went wrong and detailed eight contributing factors in the failure report, identifying the main cause as a failed conversion of Imperial units to SI units. During the orbital insertion maneuver, the intended height of the orbiter above the Mars surface was 110 kilometers. Instead, it ended up on a trajectory that brought it to a height of 57 kilometers, causing the orbiter to enter Mars’ atmosphere and disintegrate.
As it turns out, the software used to calculate the impulse necessary for maneuvering used the Imperial unit system, providing the data in pound-seconds. The software used to calculate the orbiter’s trajectory interpreted this data as SI newton-seconds. This put the orbiter on the wrong trajectory.
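The missing conversion is a one-liner. As a sanity check, here is the factor involved, using the exact definition of the pound-force (this is an illustration, not NASA’s actual code):

```python
# 1 lbf = 0.45359237 kg * 9.80665 m/s^2 = 4.4482216152605 N (exact by definition)
N_PER_LBF = 0.45359237 * 9.80665

def impulse_to_si(impulse_lbf_s):
    """Convert an impulse from pound-seconds to newton-seconds,
    the conversion the trajectory software never applied."""
    return impulse_lbf_s * N_PER_LBF

# Reading pound-second data as if it were newton-seconds understates
# every thruster impulse by a factor of ~4.45
print(impulse_to_si(1.0))  # 4.4482216152605
```

Every thruster firing was therefore interpreted as roughly 4.45 times weaker than it actually was, which accumulated into the fatal trajectory error.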
COMSOL Multiphysics supports different unit systems and enables easy and accurate conversion between them. At the model root level (or root node), the COMSOL Multiphysics software enables you to select an appropriate unit system and its different variations.
To demonstrate the capabilities of COMSOL Multiphysics for unit systems, we use a model from the Application Library: the Tapered Cantilever with Two Load Cases.
The tapered cantilever model and different unit systems in COMSOL Multiphysics.
The choice of unit system does not constrain you to use units from the selected system. Instead, the chosen unit system is applied by default to physical quantities where no unit is specifically written. For example, with SI selected for Unit System, you can still apply pressure in MPa by writing [MPa] after a value, or prescribe displacement in inches by writing [in] after a value. This flexibility enables you to work with different unit systems at the same time.
Take a look at the Boundary Load feature in the tutorial model. The left image below shows the boundary load in the x direction as 10[MN/m] in SI units. However, the same load can be provided in British engineering units, i.e., pound-force per inch, as 57101.47[lbf/in]. By solving the model with the equivalent boundary load in British engineering units, you get exactly the same result as with SI units.
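This equivalence is easy to verify outside the software. A minimal sketch, using the exact definitions of the pound-force and the inch:

```python
N_PER_LBF = 4.4482216152605   # 1 lbf in newtons (exact by definition)
M_PER_IN = 0.0254             # 1 inch in meters (exact by definition)

def mn_per_m_to_lbf_per_in(load_mn_per_m):
    """Convert a line load from MN/m to lbf/in."""
    newtons_per_meter = load_mn_per_m * 1.0e6
    return newtons_per_meter / N_PER_LBF * M_PER_IN

print(round(mn_per_m_to_lbf_per_in(10.0), 2))  # 57101.47
```

The converted value matches the 57101.47[lbf/in] figure used in the model, so the two load inputs are physically identical.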
The boundary load in SI units (left) and the boundary load in British engineering units (right).
You can enter material data in a unit system other than the default by simply writing units in brackets. The same logic applies to the Geometry node.
Young’s Modulus in GPa in the Material node (left) and the length in feet in the Geometry node (right).
What does the COMSOL® software do if you assign an incorrect or unexpected unit? In this case, the input display appears orange for the physics interface, physics features, and materials. An inconsistent unit can occur by summing terms with units that represent different physical quantities, such as: 273[K] + 3[ft]. A tooltip displays a message in the corresponding field.
For valid but unexpected units, the message contains the deduced and expected units in the current unit system. For a Boundary Load feature in the model discussed above, if the load is entered in kilograms, then the display of the load’s input field is orange and shows a warning message when the pointer is moved over the text. In this case, a boundary load given as 10[kg] is actually treated as 10 without units, which in the given unit system means 10[N/m].
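At its core, this check is dimensional analysis: quantities with different dimensions cannot be summed. Here is a minimal toy sketch of the idea (not COMSOL’s actual unit engine):

```python
class Qty:
    """A value tagged with a dimension label. Adding quantities with
    different dimensions raises an error, mimicking the orange warning
    the software shows for an inconsistent unit."""
    def __init__(self, value, dim):
        self.value, self.dim = value, dim

    def __add__(self, other):
        if self.dim != other.dim:
            raise ValueError(f"inconsistent units: {self.dim} + {other.dim}")
        return Qty(self.value + other.value, self.dim)

try:
    Qty(273, "K") + Qty(3, "ft")          # the example sum from the text
except ValueError as err:
    print(err)                            # inconsistent units: K + ft
```

A real unit engine tracks full dimension vectors (length, mass, time, and so on) rather than string labels, but the principle of flagging the sum is the same.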
The warning message for the unexpected unit in the Boundary Load feature.
At the Geometry node, you can specify scaled or prefixed metric units of length, as well as angles in degrees or radians. For specific applications, like MEMS, you may need geometry units in micrometers instead of meters, which is the default setting. Note, however, that the length unit for the geometry does not affect the units that include length in the physics interfaces or any other part of COMSOL Multiphysics. The material properties from the Material Library are in SI units by default, so changing the unit system of the model does not change the material data automatically.
In COMSOL Multiphysics, parameters and variables (under Definitions) can be defined in any units by writing the unit in brackets after the number. The same logic applies to functions (under Definitions), where the units of the arguments and of the function can be specified separately. Variables, parameters, and functions with units different from the model’s unit system can be used in material data and physics features.
COMSOL Multiphysics includes postprocessing tools for even more power and flexibility, enabling you to see your results in different unit systems or different scaled/prefix units of the same unit system. For example, the default plot of von Mises stress in the Solid Mechanics interface can be visualized with 19 different units. However, when the unit system of the model is changed to the British engineering unit system, the von Mises stress can be visualized in 49 different units.
There are many examples that show the adaptability of the COMSOL software. For example, in COMSOL Multiphysics, both names and symbols can be used for a unit. You can denote an electric current in SI units as either 2.4[ampere] or 2.4[A].
In this blog post, we’ve demonstrated how the COMSOL Multiphysics software deals with different unit systems, the flexibility of using and mixing units from different unit systems, and how to handle unexpected units. This functionality empowers you with more modeling flexibility and reduces chances of error while dealing with different unit systems or derived/prefix units of the same system.
Want to see how the flexible unit functionality of COMSOL Multiphysics can benefit your modeling and simulation? Learn more about COMSOL Multiphysics via the button above and when you’re ready, request a software demonstration.
The beam of light that Erasmus Bartholinus observed traveling straight through the crystal is called an ordinary ray. The other light beam, which bends while traveling through the crystal, is an extraordinary ray. Anisotropic materials, such as the crystal from the stone and bench experiment described above, are found in applications ranging from detecting harmful gases to beam splitting for photonic integrated circuits.
Ordinary and extraordinary rays traveling through an anisotropic crystal.
In a physical context, when an unpolarized electromagnetic beam of light propagates through an anisotropic dielectric material, it polarizes the dielectric domain, leading to a distribution of charges known as electric dipoles. This phenomenon leads to induced fields within the anisotropic dielectric material, wherein two kinds of waves experience two different refractive indices (ordinary and extraordinary).
The ordinary wave is polarized perpendicular to the principal plane and the extraordinary wave is polarized parallel to the principal plane, where the principal plane is spanned by the optic axis and the two propagation directions in the crystal. Because of this behavior, the waves propagate with different velocities and trajectories.
In a previous blog post, we discussed silicon and how its derivative, silicon dioxide, is used extensively in photonic integrated chips due to its compatibility with the CMOS fabrication technique. Bulk silicon, which has an isotropic property, is used to develop prototypes for photonic integrated chips. However, due to unique optical properties such as splitting beams and polarizationbased optical effects, anisotropy comes into play at a later stage.
Anisotropy in silicon photonics can occur unintentionally due to the annealing process used while fabricating the waveguide. The difference in thermal expansion between the core and the cladding induces stress, and through stress-optical effects this results in phenomena such as mode splitting and pulse broadening. Anisotropy can also be introduced intentionally by varying the porosity of silicon dioxide. This gives researchers a range of effective refractive indices, from that of silicon dioxide (n ~1.44) to that of air (n ~1), enabling very sensitive optical sensing applications.
To perform qualitative analyses of anisotropic media, researchers investigate how optical energy propagates within planar waveguides (also known as modes of propagation). In planar waveguides, we define modes using E^{x}_{pq} and E^{y}_{pq} terminology (Ref. 2), where x and y depict the direction of polarization and p and q depict the number of maxima in the x- and y-coordinates.
Picture it this way: You are walking on an E^{x}_{21} “landscape” (as shown below). The “winds” (polarization) blow along the ±x direction, and you encounter two distinct peaks when traveling from the -x to +x direction. When you move from the -y to +y direction, you observe both of the peaks simultaneously.
Mode analysis of the planar waveguide, showing the six lowest-order E^{x}_{pq} and E^{y}_{pq} modes. The arrow plot represents the electric field; the contour and surface plots represent the out-of-plane power flow (red is high and blue is low magnitude).
Before launching a beam of light through a waveguide using a laser source, it is important to know which optical modes could persist within a specified core/cladding dimension of the waveguide. Performing a mode analysis using a full vectorial finite element tool, such as the COMSOL Multiphysics® software, could be very helpful to qualitatively and quantitatively analyze the optical modes and dispersion curve respectively.
Performing a modal analysis on an isotropic material requires the definition of a single complex value, while an anisotropic material requires a full relative permittivity tensor. The electric permittivity essentially relates the electric field to the material property. Here, the tensor is a 3-by-3 matrix with both diagonal (ε_{xx}, ε_{yy}, ε_{zz}) and off-diagonal (ε_{xy}, ε_{xz}, ε_{yx}, ε_{yz}, ε_{zx}, ε_{zy}) terms, as shown below.
However, for all materials, you can find a coordinate system in which only the diagonal elements of the permittivity tensor are nonzero, while the off-diagonal elements are all zero. The three coordinate axes of this rotated coordinate system are the principal axes of the material and, correspondingly, the three diagonal elements of the permittivity tensor are called the principal permittivities of the material.
There are basically two kinds of anisotropic crystal: uniaxial and biaxial. With a suitable choice of coordinate system, where only the diagonal elements of the permittivity tensor are nonzero, a uniaxial crystal has ε_{xx} = ε_{yy} = (n_{o})^{2} and ε_{zz} = (n_{e})^{2}, where n_{o} and n_{e} are the ordinary and extraordinary refractive indices. However, when all three diagonal elements differ, it is known as a biaxial crystal.
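The principal-axes statement above can be checked numerically: diagonalizing a symmetric permittivity tensor recovers the principal permittivities. The tensor below is hypothetical, chosen so that its principal values match the refractive indices used later in this post:

```python
import numpy as np

# Hypothetical symmetric relative permittivity tensor, written in a
# coordinate system that is not aligned with the crystal's principal axes
eps = np.array([[2.25, 0.06, 0.0],
                [0.06, 2.25, 0.0],
                [0.0,  0.0,  2.31]])

# Eigenvalues of the symmetric tensor are the principal permittivities.
# Two of them coincide, so this is a uniaxial crystal:
# n_e^2 = 2.19 once and n_o^2 = 2.31 twice
principal = np.linalg.eigvalsh(eps)
print(np.round(principal, 6))   # approximately [2.19, 2.31, 2.31]
```

The eigenvectors of the same decomposition (from `np.linalg.eigh`) give the directions of the principal axes, i.e., the rotated coordinate system described above.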
To put this argument into a modeling perspective, we can extend the buried rib waveguide example from this blog post on silicon photonics design. We perform a modal analysis on the 2D cross section of the waveguide, with square core and cladding side lengths of 4 um and 20 um, respectively (shown below). The operating wavelength is 1.55[um] in all cases.
Schematic of the 3D buried rib optical waveguide, where the mode analysis is performed at the inlet 2D cross section. The intensity and arrow plots represent the mode and the polarization of the E-field, respectively.
Core of the rib waveguide depicting the optic axis (red) along the x-axis and the principal axis (blue).
In the classic case of a uniaxial material, we assume the optic axis (i.e., c-axis) is along the principal x-axis (as shown above) and consider the diagonal relative permittivity terms ε_{yy} and ε_{zz} (which are orthogonal to the c-axis) as the square of the ordinary refractive index (~1.5199^{2} ~ 2.31). The ε_{xx} component, which is along the c-axis, is the square of the extraordinary refractive index (~1.4799^{2} ~ 2.19) (as per Ref. 3). In addition, the off-diagonal terms are zero (as shown below), and the cladding has an isotropic relative permittivity (~1.4318^{2}). The optical modes derived are the 6 modes shown above. Note the difference in the refractive indices: n_{xx} – n_{yy} is known as birefringence, where n_{xx} = √ε_{xx} and n_{yy} = √ε_{yy}.
Relative permittivity tensor with diagonal elements.
By evaluating the optical modes, we can visually comprehend the behavior of the optical waveguide. However, the dispersion curves could also be handy for performing quantitative analyses. A dispersion curve represents the variation of the effective refractive index with respect to the length of the waveguide or the operating frequency.
A modal analysis is performed while parametrically sweeping the length of the waveguide from 0.5 um to 4 um to derive the dispersion curve for the anisotropic core, as shown in the figure below. We assume the case stated earlier, with diagonal anisotropy terms of the core (i.e., ε_{xx} = 2.19, ε_{yy} = ε_{zz} = 2.31) and all of the off-diagonal elements zero. The results are compared with Koshiba et al. (Ref. 3).
Dispersion curve with transverse anisotropic core.
When the optic axis (i.e., c-axis) lies in the XY plane and makes an angle θ with the x-axis, the diagonal components ε_{xx}, ε_{yy}, ε_{zz} and the off-diagonal components ε_{xy} and ε_{yx} are nonzero, while the rest of the components are zero. The full relative permittivity tensor can be evaluated by using the rotation matrix [R] as shown below, where [R] specifically rotates the c-axis in the XY plane. ε_{xx} is the square of the extraordinary refractive index (~2.19), because the c-axis lies along the principal x-axis, while ε_{yy} and ε_{zz} are the square of the ordinary refractive index (~2.31). The off-diagonal elements ε_{xy} and ε_{yx} are derived from the multiplication of the matrices as stated below.
The c-axis lying in the XY plane and making an angle of θ with the x-axis.
The relative permittivity tensor ε is treated along with a rotation matrix, rotating the c-axis in the XY plane with angle θ.
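To see how the off-diagonal terms arise, the rotation ε′ = [R] ε [R]ᵀ can be sketched numerically. Below is a minimal pure-Python check (an illustration, not code from the model files) using the ordinary and extraordinary indices quoted above and assuming the standard similarity transform; at θ = 45°, ε_{xy} and ε_{yx} become nonzero while all other off-diagonal terms stay zero:

```python
import math

def rotate_xy(eps, theta):
    """Similarity transform eps' = R eps R^T for a rotation about the z-axis,
    which sweeps the c-axis through the XY plane."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s, 0.0],
         [s,  c, 0.0],
         [0.0, 0.0, 1.0]]
    RE = [[sum(R[i][k] * eps[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return [[sum(RE[i][k] * R[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

n_e, n_o = 1.4799, 1.5199  # extraordinary and ordinary indices (Ref. 3)
eps = [[n_e**2, 0.0, 0.0],   # c-axis along x, so eps_xx = n_e^2
       [0.0, n_o**2, 0.0],
       [0.0, 0.0, n_o**2]]

eps45 = rotate_xy(eps, math.radians(45))
# eps45[0][1] (eps_xy) and eps45[1][0] (eps_yx) are now nonzero;
# eps45[0][2] and eps45[1][2] remain zero, and eps_zz is unchanged.
```

The nonzero ε_{xy} is proportional to the difference between the two permittivities, which is why the effect vanishes for an isotropic material.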
Next, we perform the modal analysis of the waveguide with an off-diagonal anisotropic core and an isotropic cladding, where the optic axis makes angles of 0°, 15°, 30°, and 45° with respect to the principal x-axis, as shown below. Here, we can observe that the direction of the in-plane magnetic field changes with the angle of the optic axis. The dispersion curve can also be plotted by parametrically sweeping the length of the core and cladding from 0.5 µm to 4 µm while keeping the angle at 45°. The resulting dispersion curve is similar to that of the diagonal anisotropy discussed above.
Mode analysis, including off-diagonal terms, for θ = 0° (top left), θ = 15° (top right), θ = 30° (bottom left), and θ = 45° (bottom right). The figure represents the magnetic field lines within the core for different rotation angles.
Finally, when considering the longitudinal anisotropy, where the optic axis (i.e., c-axis) lies in the YZ plane and makes an angle of θ with the y-axis, the diagonal components ε_{xx}, ε_{yy}, and ε_{zz} and the off-diagonal components ε_{yz} and ε_{zy} are nonzero, while the rest of the components are zero. The relative permittivity tensor can be evaluated by using the rotation matrix [R] as shown below, where the rotation matrix [R] is specifically for rotating the c-axis in the YZ plane. In the unrotated tensor, ε_{yy} is the square of the extraordinary refractive index (~2.19), because the c-axis starts along the principal y-axis, while ε_{xx} and ε_{zz} are the square of the ordinary refractive index (~2.31). The off-diagonal elements ε_{yz} and ε_{zy} result from the matrix multiplication stated below.
The c-axis lying in the YZ plane and making an angle of θ with the y-axis.
The relative permittivity tensor ε is treated along with a rotation matrix, rotating the c-axis in the YZ plane with angle θ.
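The same kind of numeric sketch works for the longitudinal case: rotating about the x-axis sweeps the c-axis through the YZ plane, and only ε_{yz} and ε_{zy} pick up nonzero values (again an illustration with the same assumed indices, not code from the model):

```python
import math

def rotate_yz(eps, theta):
    """Similarity transform eps' = R eps R^T for a rotation about the x-axis,
    which sweeps the c-axis through the YZ plane."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[1.0, 0.0, 0.0],
         [0.0, c, -s],
         [0.0, s,  c]]
    RE = [[sum(R[i][k] * eps[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return [[sum(RE[i][k] * R[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

n_e, n_o = 1.4799, 1.5199
eps = [[n_o**2, 0.0, 0.0],
       [0.0, n_e**2, 0.0],   # c-axis along y, so eps_yy = n_e^2
       [0.0, 0.0, n_o**2]]

eps45 = rotate_yz(eps, math.radians(45))
# eps_yz and eps_zy are nonzero; eps_xy and eps_xz stay zero; eps_xx is unchanged.
```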
A modal analysis is then performed where the length of the waveguide is parametrically swept from 0.5 µm to 4 µm to derive the dispersion curve for the longitudinal anisotropic core, as shown in the figure below. In this case, θ = 45° (i.e., the c-axis lies in the YZ plane and makes a 45° angle with the y-axis) (Ref. 3).
Dispersion curve with longitudinal anisotropic core.
In this blog post, we performed qualitative analyses (modes of propagation) and quantitative analyses (dispersion curves) of an anisotropic optical waveguide using modal analysis in COMSOL Multiphysics. Diagonal anisotropy as well as off-diagonal transverse and longitudinal anisotropy were considered to derive their dispersion relationships. These types of analyses give us more flexibility when optimizing material and geometric parameters and help us gain an in-depth, intuitive understanding of the physics of anisotropic materials.
A simple tutorial model to help you get started is the Step-Index Fiber, which involves a mode analysis over a 2D cross section of the 3D optical fiber.
One possible format when working with scanned data is text files with coordinate data from images of slices produced by an MRI or CT scan. In this example, let’s look at a case where we have a number of files with cross-sectional coordinates from different planes of a human head. Each coordinate file represents a curve of the outer surface of the head in that particular plane.
In short, the procedure includes:
- Importing each text file with the Interpolation Curve feature
- Partitioning the resulting curve objects so that their vertices align
- Creating the solid with the Loft operation
- Cleaning up the lofted faces with virtual operations
Now, let’s look at each step in more detail.
To be able to import a text file in the Interpolation Curve feature, the coordinates need to be organized in the Sectionwise format. This is a native format of COMSOL Multiphysics in which the text file is organized into one section with coordinates, one with the element connectivity, and one with data columns. Only the first two sections are needed here; the data columns can be omitted when using this format for geometry creation. Below is an example of a file in the Sectionwise format:
%Coordinates
One to three columns containing x, y (optional), and z (optional)
%Elements
Triangulation where each row contains the row indices of the points in the Coordinates section that make up one element (triangular in 2D, tetrahedral in 3D)
%Data (funname)
Column of data values for each point
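As an illustration of the layout, the short Python sketch below writes a closed 2D curve in a Sectionwise-style file: a %Coordinates section with one point per row, followed by an %Elements section where each row lists the two endpoints of a line segment. The file name and the 1-based index convention are assumptions in this sketch, not details taken from the documentation:

```python
import math

def write_sectionwise(path, points, closed=True):
    """Write 2D points in a Sectionwise-style layout: %Coordinates followed by
    %Elements, where each element row lists the two endpoints of a segment
    (1-based point indices are assumed in this sketch)."""
    lines = ["%Coordinates"]
    lines += [f"{x:.6f} {y:.6f}" for x, y in points]
    lines.append("%Elements")
    n = len(points)
    n_segments = n if closed else n - 1
    for i in range(1, n_segments + 1):
        lines.append(f"{i} {i % n + 1}")  # the last segment closes back to point 1
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

# A hypothetical cross-section curve: eight points on a unit circle
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
content = write_sectionwise("slice_01.txt", pts)
```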
In this example, we have 17 text files containing coordinate data from the 3D object. One Interpolation Curve feature per text file is added, which gives 17 curve objects in total. The Closed curve setting is used to ensure that the created curve objects are indeed closed and that the first- and second-order derivatives are continuous everywhere. In COMSOL Multiphysics, lofting a closed curve produces a solid object, while lofting an open curve produces a surface. The Relative tolerance is increased to 0.001 or 0.01 to produce a smoother representation of the curve. With the default tolerance (which is 0), the curves have a more jagged shape. In this example, the top of the head is represented by a point.
The Settings window for the Interpolation Curve feature (left) and all of the curves representing the outer shell of a head (right). The Relative tolerance is increased to 0.001 or 0.01 to produce a smoother representation. The Closed curve setting ensures that the curve becomes closed and has continuous first- and second-order derivatives everywhere.
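The effect of a looser tolerance — trading point-by-point fidelity for smoothness — can be mimicked outside COMSOL. The sketch below (an illustration only, not the software’s actual spline algorithm) low-pass filters a jagged closed polygon by keeping just its lowest Fourier harmonics; because the result is periodic, its first- and second-order derivatives are automatically continuous, much like a closed interpolation curve:

```python
import cmath, math

def fourier_smooth(points, n_keep):
    """Smooth a closed 2D polyline by keeping only harmonics |k| <= n_keep of
    the complex signal z = x + iy; periodicity keeps all derivatives continuous."""
    n = len(points)
    z = [complex(x, y) for x, y in points]
    coeff = {k: sum(z[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n)) / n
             for k in range(-n_keep, n_keep + 1)}
    return [sum(c * cmath.exp(2j * math.pi * k * j / n)
                for k, c in coeff.items())
            for j in range(n)]

# A jagged "closed curve": a circle whose radius alternates between 1.2 and 0.8
pts = [((1 + 0.2 * (-1) ** k) * math.cos(2 * math.pi * k / 16),
        (1 + 0.2 * (-1) ** k) * math.sin(2 * math.pi * k / 16))
       for k in range(16)]
smooth = fourier_smooth(pts, 2)
# The high-frequency wiggle is filtered out, leaving a near-perfect unit circle.
```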
Now that we have the curve objects that define the outline of each cross section of the head, we can create the solid shape with the Loft operation. The Loft operation is one of the geometry modeling tools included in the Design Module, which you can read about in this introductory blog post. Before setting up the Loft operation, we need to make sure that the curve objects are suitable as profiles for lofting. Lofting curves or surfaces to a solid requires the different profiles to have the same number of edges and points. The exception is the first and last objects (called the start and end profiles), which can be points, as is the case for the top of the head in this example.
A closed interpolation curve has two vertices, but it is not possible to choose their positions. So, the criterion mentioned above — to have the same number of edges and points for the intermediate objects — is already fulfilled, as all of the created curves have two edges. However, where these points are placed on the profile objects is also important. When the loft works its way through the curves, it connects all of the points with edges in the direction of the loft. If the points are not positioned along a fairly straight line, the resulting surfaces might become distorted. Therefore, we often need to partition the edges further to accomplish a good representation of all of the surfaces. To do this, we can use two different features, as described next.
How to partition the edges and which features to use for this purpose is not an exact science; rather, it involves some trial and error and a decision based on what looks best after a visual inspection. Here, we use the Partition Objects and Partition Edges features. The advantage of the Partition Objects operation is that it allows multiple curve objects to be partitioned at locations defined by the points of intersection with a selected plane. As some of the interpolation curves already contain points that are fairly well aligned on the front and back of this human head example, a Work plane is added at y = 0 to create more points along the same imaginary lines.
Partitioning some of the curves using a Work plane at y = 0. The settings for the Partition Objects feature (left). The curve objects highlighted in blue (right) are partitioned at two places using the work plane, which is pictured as a gray sheet with a rectangular grid.
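The core of the Partition Objects step — finding where each curve crosses the work plane — can be sketched in a few lines of Python. Assuming straight segments between data points, this hypothetical helper walks a closed polyline and inserts a vertex wherever a segment crosses y = 0:

```python
def partition_at_y0(points):
    """Insert a vertex where each segment of a closed polyline crosses the
    plane y = 0, using linear interpolation along the segment."""
    out = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        out.append((x0, y0))
        if y0 * y1 < 0:                      # strict sign change -> a crossing
            t = y0 / (y0 - y1)               # fraction of the way to (x1, y1)
            out.append((x0 + t * (x1 - x0), 0.0))
    return out

# A square straddling y = 0 gains two vertices, at (-1, 0) and (1, 0)
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
cut = partition_at_y0(square)
```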
The Partition Edges feature partitions selected edges based on either specified relative arc lengths or by projecting one or several vertices. As we want the vertices to line up fairly well when lofting curves, projecting vertices is a good option. However, for some edges, it is better to specify a relative arc length to have more control over where the vertex is created.
The Settings windows for the Partition Edges features, showing both the Vertex projection (left) and the Arc length (middle) specification types, and the edges selected for a vertex projection (right).
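The Arc length specification type has a simple geometric meaning that can be sketched as follows. Given an open polyline and a relative arc length s between 0 and 1, this hypothetical helper returns the point that fraction of the way along the curve:

```python
import math

def point_at_relative_arclength(points, s):
    """Return the point a fraction s (0 <= s <= 1) along a polyline's length."""
    seglens = [math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1)]
    target = s * sum(seglens)
    for i, length in enumerate(seglens):
        if target <= length or i == len(seglens) - 1:
            t = target / length if length else 0.0
            (x0, y0), (x1, y1) = points[i], points[i + 1]
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        target -= length

# An L-shaped polyline of total length 4
poly = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
mid = point_at_relative_arclength(poly, 0.5)   # halfway -> the corner (2, 0)
```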
To verify that the geometry objects have the same number of edges and points, we click the Select Objects button above the Graphics window, select a curve object in the Graphics window, and then click the Measure button in either the Geometry or Mesh tab. The output of this measurement is written to the Messages log.
Now that the points are roughly aligned, it is time to create the solid. The Loft feature contains many options, but we only use the most straightforward procedure here: adding all of the curve objects and the point on the top of the head in the Profile objects list. The start and end profiles are determined automatically by the Loft operation. As shown in the left image below, there are many collapsed sections (highlighted in blue) that can be used to finetune the loft; for example, to specify the direction of the loft. The collapsed sections are not used in this example.
The Settings window of the Loft operation (left), showing the input Profile objects, which is the only input used in this example. The right image shows the resulting solid head.
A surface or solid object lofted from closed continuous profile curves has at least two seams that go through the vertices of the profile curves, creating two face partitions. More seams may be introduced by the operation, depending on the alignment of the vertices on the different curves. If the profile curves have discontinuous tangents, additional seams are introduced and go through these points. No additional seams are introduced when using the default setting Face partitioning: Minimal in the Loft operation (see the image above), as is the case in this example.
If we want the lofted surface to be more partitioned — for example, to assign boundary conditions — the partitioning options Column and Grid can be used. The first option divides the surface along each vertex in the profile curves, while the latter also adds partitions along the profile curves. Yet another possibility is to use the different Partition operations available in the Geometry ribbon. On the other hand, if we want a cleaner appearance, we can use Virtual Operations to create composite faces. Ignore Edges is one of the features that can be used for this purpose, but Form Composite Faces gives the same end result.
By adding the edges shown in the previous image to an Ignore Edges operation (left), the final geometry gets a smooth, clean look (right).
This blog post has discussed the possibility of creating curves from coordinate data and then lofting these curves into a solid object. Upcoming posts in this blog series will discuss other possible formats as well as approaches for handling irregular shapes in COMSOL Multiphysics.
Download the file used to create the example featured here:
NASTRAN is a registered trademark of NASA.
Consider the following story. A large industrial reactor showed unexpected behavior in its bubbly flow compartment. Due to very costly downtimes — and difficulty in getting good measurements — the engineering department made a CFD model to analyze the problem. The engineer creating this model had to include a bidirectional coupling, as gas bubbles affected the water pressure and turbulence and water influenced the creation and transport of the gas. As such, the model became very nonlinear.
The engineer started by using a Bubbly Flow interface on the 3D CAD model of the reactor section. All of the relevant boundary conditions, material properties, and mesh settings were defined, but the engineer ran into convergence issues. The model had several inlets, outlets, narrow regions, gas creation sites, and more. Since the time-dependent study took about seven hours to compute, finding the cause of the trouble was an arduous task. Meanwhile, the deadline kept creeping closer and closer.
What would you do in this situation?
Sloshing inside a vehicle’s fuel tank. This example model involves computing the velocity and pressure for both the gas and liquid simultaneously. Such CFD computations in 3D are extremely valuable, but also have long computation times.
Before continuing the story, let’s discuss (in a general way) what could have happened, because there are already lessons to be learned. There are many situations where a 3D model with complicated physics is required. A common approach is to take all of the expected “ingredients”, put them together, and reach your goal. But what if something unexpected happens? What if the model doesn’t converge? Now you’re in a tricky situation, in which any of the following factors could be the cause: the geometry, the mesh, a boundary condition, the solver settings, or the nonlinear coupling between the physics.
A full 3D model might take anywhere from minutes to several hours to run. Thus, every mistake in a complex model takes longer to discover. Moreover, you can’t be sure what causes the issue. A more reliable approach is to use the following steps:
- Start with a simplified version of the model; for example, a 2D cross section with a coarse mesh
- Verify each physics interface on its own before coupling them
- Add complexity one step at a time, checking the results after each addition
This workflow is extremely efficient, because if you run into trouble, you can precisely pinpoint the cause and fix it quickly.
The File menu in COMSOL Multiphysics, showing where you can access the Application Library.
The Application Library, showing part of the results when typing “turbulent” in the search box. Every example model and app in the Application Library includes documentation and stepbystep instructions.
Let’s go back to the industrial reactor model. The COMSOL Support team recommended making a simplified 2D model. Since this model required only five minutes to run, the engineer quickly identified the culprit: One outlet pipe was cut off in the CAD model. The short outlet caused a high vortex area to conflict with the boundary condition, which was fixed at a constant pressure. A short outlet is normally not a problem, but here, the combination of the outlet vortex with 3D turbulent bubbly flow and a lessthanoptimal mesh made it a showstopper. The model ran smoothly after simply extending the outlet with an Extrude geometry operation.
A simplified 2D model that demonstrates the physical phenomenon is shown below.
Left: The original situation. A streamline plot of the velocity shows that the outlet cuts off a vortex in the flow. Right: The velocity profile with the extended outlet.
A closeup view of the velocity profiles in the outlet region. Left: The original, cutoff outlet. Right: The extended outlet. The outlet extension is hidden in this image to make a good comparison.
Computers are always getting faster, so when you have easy-to-use multiphysics software, it’s tempting to include all of the details in your model at once. After all, more detail means more accuracy. However, accuracy comes at the price of an increased computation time. Therefore, each modeling project should be preceded by an assessment of the required accuracy versus the time budget. This important step is often forgotten or underestimated. Think about the following scenarios and how they differ in demands: a rough concept study comparing design alternatives, for example, versus a final verification model whose results must match measurements within a few percent.
If you start out with an unrealistic goal given the time budget — or worse, no welldefined goal — then you risk running into trouble.
In many cases, simulation projects do require high accuracy. When modeling realworld situations, multiphysics phenomena often have to be taken into account, typically in 3D CAD geometries. COMSOL Multiphysics covers the full spectrum of computational demands, from straightforward to complex. This gives you the freedom to maneuver anywhere between these extremes. Once you have determined your goal, you can outline a strategy on how to get there.
Left: A static load analysis of a bracket. Right: A coupled acousticmechanical analysis of a gearbox. Both analyses can be performed in the same interface and workflow when using COMSOL Multiphysics.
By using the steps outlined in this blog post, you will have full confidence in reaching your modeling goals before a deadline. Notice that this approach is both simple and straightforward. However, when a complication occurs, it’s easy to get stuck in the details and lose sight of the bigger picture. This can happen even to experienced simulation engineers. One of the most important modeling skills is the ability to isolate a problem and reduce it to the very essentials when needed.
Want to evaluate the COMSOL® software for your unique needs? Contact us via the button above.
Browse more blog posts about solving models in COMSOL Multiphysics: