We are regularly adding more examples to the Application Libraries. However, as the number of models and apps in the libraries grows, it becomes more difficult to find a specific one.
The Application Libraries in COMSOL Multiphysics, with the Thermal Actuator model open.
To help circumvent this catch-22, you can use the search tools available in the Application Libraries to easily narrow down your search. The Application Libraries are organized by module, with subfolders for further organization, and the search field can be used to search any free text in model descriptions. For example, searching automotive returns “automotive_muffler” as well as the “brake_disc” and “snap_hook” models (because both descriptions also contain the term “automotive”).
Let’s look at some alternatives to searching free text for a faster and easier way to find a specific model. (Note that the Application Libraries will only contain models and apps that you have downloaded during or after installation, so the results shown in this blog post may not match your own search results.)
Note: To get the most out of the search functionality discussed in this blog post, we recommend using version 5.3 update 3 of COMSOL Multiphysics or later.
There is a way to search through the names of applications with more purpose than by free text. To configure the search functionality to search for models based strictly on names, you can use the prefix @name:. This tool enables you to search for an exact match, such as @name:electric_sensor, which will return the model with that exact name. You can also search for a partial match with a specific beginning (or end), such as @name:elec*, which will return any model whose name starts with “elec”. Finally, you can search for a partial match with the search string anywhere in the name, such as @name:*elec*, which will return any model with “elec” anywhere in the name.
Search for a model by the exact name (left), text that the name begins with (center), or text that appears anywhere in the name (right).
With this capability, you can search for the exact model you are looking for instead of clicking through the folder structure, as long as you know the name. Or, you can search for models whose names involve a key term, prefix, or suffix, such as @name:*mixer*, @name:piezo*, and @name:*metry, respectively.
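The wildcard semantics of these @name: queries resemble standard glob patterns. As a rough illustration of the matching behavior (this is not the COMSOL® API, and the model names below are invented), Python's fnmatch module reproduces the three query styles:

```python
from fnmatch import fnmatch

# Hypothetical library of model file names
models = ["electric_sensor", "electric_motor", "piezoacoustic_transducer",
          "interferometry", "micromixer"]

def name_query(pattern):
    """Mimic an @name: query: exact match unless the pattern contains *."""
    return [m for m in models if fnmatch(m, pattern)]

print(name_query("electric_sensor"))  # exact match
print(name_query("elec*"))            # names beginning with "elec"
print(name_query("*metry"))           # names ending with "metry"
print(name_query("*mixer*"))          # "mixer" anywhere in the name
```

The same pattern returns either a single exact hit or every name that matches the wildcard, mirroring the three query styles described above.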
Searching for a model by its title is simple enough, but searching by text included in a model file is another story. The following features make it easy to search by terms in MPH-files to find the specific model feature you are looking for. First, let’s look at how to find these search terms, in this case, tags.
In the toolbar at the top of the Model Builder window, there is a button to the far right, Model Tree Node Text. This button displays a string next to the nodes in the model tree. While you are able to search the model files with any of the Name, Tag, or Type options, the Tag option applies to the most nodes, so it is the most efficient choice.
Click to show tags for applicable nodes; {comp1} is the only tag shown in this screenshot.
You can then create queries that find all of the models containing that tag, or feature, in the format @ followed by the tag, removing the number associated with the tag. As we’ll discuss later on, you can further narrow your search results from a tag query.
If you are curious about a certain physics feature, such as what it is used for and how, you can search through the Application Libraries for models that use it. For example, the Release from Grid feature, included with the Particle Tracing Module, is featured in the Application Libraries in different modules.
Without being able to search these tags, there would be no practical way to find some features. Most features with a tag can be found in this way, whether it is a definition (functions, selections, probes, and couplings); geometry (primitives and operations); physics boundary condition; mesh node; study step; or results node (plot types, data sets, and derived values).
The list of models that contain the Release from Grid feature. This information is not available just by looking at the model preview on the right.
Certain physics interfaces are included in models that you might not expect. This makes it difficult to find all of the models that use an interface. The Global ODEs and DAEs (ge) interface is one example. How would you know which models use this widely applicable interface without reading through model descriptions one by one?
Using the scoping syntax @physics:, you can search the Application Libraries for models that include a specific interface. The search term @physics:ge, for example, will locate all models that use the Global ODEs and DAEs (ge) interface. You only need to know the abbreviations, all of which can be found in the Add Physics Settings window. You can also find these models with the search term @ge; however, this will also return models that contain the Global Equations node.
A search for @physics:ge returns all of the models that use the Global ODEs and DAEs interface.
Let’s say, for example, that you want to search for all models in the Application Libraries that include a geometry sweep. You could search by tag, but both the swept mesh and sweep geometry operations have the same tag, {swe}. The number of models that contain swept meshes far outweighs the number of models that contain geometry sweeps, so if you want to learn how to implement a geometry sweep, it becomes almost impossible to find a relevant example by querying @swe. The solution? You can enter @geom:swe to search for the {swe} tag within the Geometry node only.
The query @geom:swe returns five models that contain the Sweep geometry operation.
You can search for almost any feature within the COMSOL® software Application Libraries with the tools described above. Below, you’ll find a table of search parameters that are especially useful if you are interested in finding specific features:
| Search Parameter | Use |
| --- | --- |
| @axi | Returns models that contain the default Axisymmetry physics node, which is useful for finding axisymmetric models |
| @gr | Returns models that include the effects of gravity with the Gravity feature |
| @pml | Returns models that contain a perfectly matched layer (PML) |
| @ie | Returns models that contain an Infinite Element domain |
| @physics:dg | Returns models that contain the Deformed Geometry physics interface |
| @genext | Returns models that contain the General Extrusion feature |
| @physics:shell | Returns models that contain the Shell interface, which spans five modules |
| @iss | Returns models that contain the Initial Stress and Strain attribute |
| @study:sens | Returns models that contain a Sensitivity study |
| @dataset:join | Returns models that combine two solution sets using a Join data set |
| @result:str | Returns models that contain a Streamline plot (similar queries can be used for any plot type in COMSOL Multiphysics) |
| @result:hght | Returns models that contain a Height Expression (a 2D plot attribute) |
If you find this functionality useful, and find a practical search parameter that you think would be helpful to others, we encourage you to let us know in the comments so we can add it to the table!
Check out the following related topics on the COMSOL Blog:
If you lived in Glasgow in the 1830s, one of the best ways to get around Scotland was a horse-drawn flyboat on the Glasgow, Paisley and Ardrossan Canal. Flyboats were lightweight, long, and narrow. They were pulled through the shallow water of the canal by horses.
A modern view of Glasgow, Scotland, (left) and a canal in central Scotland, home of the rotating boat lift known as the Falkirk Wheel (right).
One day, William Houston, an owner of one of the flyboat companies, was traveling down a canal when the horse pulling his flyboat was startled and bolted. Houston noticed something strange: The water exhibited no resistance. The ship glided fast (similar to what we now call “hydroplaning”), and the turbulence from the boat’s movement didn’t damage the shores of the canal. (Ref. 1)
Enter John Scott Russell, a naval architect from Glasgow. When he heard about the phenomenon Houston observed, he thought that witnessing it for himself could give him insight into boat design. While observing the canal, one of the horses suddenly stopped. A wave formed under the middle of the boat, moved to the prow, and then took off past the boat entirely. Scott Russell followed the wave, first on foot and then on his horse. He was astounded to see that the wave kept going at the same size and pace. He later called it “the wave of translation” and described the event as “the happiest day of my life.” (Ref. 2)
John Scott Russell in 1847. Image in the public domain in the United States, via Wikimedia Commons.
Scott Russell devoted two years to replicating the wave of translation so he could study it further. He even built a 30-foot basin to test his different theories. Eventually, he observed some unique properties of these waves, which he now called solitary waves. According to Scott Russell, a solitary wave:
Animation showing the behavior of a solitary wave.
He also categorized these waves into four types:
Scott Russell presented his findings at a British Science Association meeting in Edinburgh, describing the waves and the mechanics behind them. His work contained a few misinterpretations of the foundations of mechanical laws — understandably so, as his background was in architecture, not physics. Scientists at the time balked at his theories and distrusted his lack of engineering and physics expertise.
And so began a lifetime — actually, many lifetimes — of investigating the behavior of solitary waves.
At first, Scott Russell didn’t earn many fans in the scientific community regarding his theory of solitary waves. George Biddell Airy, who studied the behavior of waves in relation to the tides, did not hold Scott Russell in high regard and believed that solitary waves contradicted Pierre-Simon Laplace’s theory of fluid mechanics. Laplace’s equations, as we know, are integral to the study of fluid dynamics today. (Ref. 1)
George Gabriel Stokes, of the famed Navier-Stokes equations, also did not support the possibility of solitary waves and initially condemned their relation to tides and sound. Over time, as he researched finite oscillatory waves, he changed his stance and admitted that a theoretical solitary wave was plausible. (Ref. 1)
Scott Russell never gave up the subject of solitary waves, but he continued what his main job description entailed: building ships. He used his research into waves to design a special ship prow, based on the shape of a versed sine wave, that could better handle water resistance.
In the 1850s, Isambard Kingdom Brunel asked for Scott Russell’s help regarding a steamship called the Great Eastern. Brunel had designed the ship, which was slated to be the largest ship of its time (he affectionately called it his “Great Babe”). The ship could purportedly travel from England to Australia without needing to refuel.
The Great Eastern before its first launch. Image in the public domain in the United States, via Wikimedia Commons.
Scott Russell was successful in taking Brunel’s designs and building the Great Eastern, which eventually made many transatlantic voyages. However, his success was tarnished when Brunel passed away shortly before launch and a major accident occurred during the ship’s maiden voyage. Around this time, Scott Russell was also having trouble with his finances and was dealing with the seizure of his assets.
Toward the end of his life, Scott Russell’s solitary wave was still rejected by scientists, and he was bankrupt. Fortunately, things were about to change.
Joseph Valentin Boussinesq, a protégé of Adhémar Jean Claude Barré de Saint-Venant known for his contributions to the field of fluid dynamics, did not write off Scott Russell like others at the time. Instead, he subjected every aspect of waves and tides to mathematical analysis. In 1872, Boussinesq attempted to explain shallow water waves, which led to an equation that proved solitary waves are theoretically possible. He even mentioned Scott Russell in his paper on the subject in 1877. Lord Rayleigh independently developed similar theories on waves and also supported Scott Russell. (Ref. 1)
Finally, two scientists had spoken out in support of his work! Then, in 1882, Scott Russell passed away at the age of 74 on the Isle of Wight, England. But the story doesn’t end there.
In 1895, Diederik Korteweg and Gustav de Vries expanded on Boussinesq’s work and developed their own equation for shallow water waves, now known as the Korteweg–de Vries (KdV) equation. The KdV equation doesn’t introduce dissipation, which means it can be used to describe waves that travel long distances while retaining their shape and speed. The equation is also simpler than Boussinesq’s version and gives a better solution. (Ref. 2) Because of these advantages, the KdV equation, in its many forms, is still used to understand wave behavior today.
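In one common normalization (coefficient conventions vary between texts), the KdV equation and its single-soliton solution read:

```latex
\[
\frac{\partial u}{\partial t} + 6u\,\frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct - x_0)\right)
\]
```

In this form, the amplitude c/2 grows with the propagation speed c, while the width scales as 1/√c, so a faster soliton is both taller and narrower.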
Research into solitary waves picked up in 1965, when researchers Martin Kruskal and Norman Zabusky studied the KdV equation in more detail. They found that solitary waves can occur not just theoretically but also naturally, coining the term solitons to describe them. In addition, solitons were no longer thought of only in the context of water waves, with research into applications for optics, acoustics, and other areas.
Kruskal, along with researchers Gardner, Greene, and Miura, developed the inverse scattering transform in 1967. This method can be used to find the exact solution of the KdV equation and also demonstrates the elastic collisions between waves, originally observed by Kruskal and Zabusky. (Ref. 2)
A hydrodynamic soliton. Image by Christophe.Finot et Kamal HAMMANI. Licensed under CC BY-SA 2.5, via Wikimedia Commons.
Moving forward, scientists Zakharov and Shabat used a formulation developed by Peter Lax to solve the nonlinear Schrödinger equation, which describes the evolution of the slowly varying envelope of a general wave packet. Ablowitz, Kaup, Newell, and Segur later came up with a more systematic approach to solving the nonlinear Schrödinger equation, which is now known as the AKNS method. (Ref. 2)
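In the fiber-optics convention (signs and symbols vary by author), the nonlinear Schrödinger equation for the slowly varying envelope A(z, T) can be written as:

```latex
\[
i\,\frac{\partial A}{\partial z} - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2} + \gamma\,|A|^2 A = 0
\]
```

Here, z is the propagation distance, T the retarded time, β₂ the group-velocity dispersion, and γ the Kerr nonlinearity coefficient; bright optical solitons arise when the dispersion is anomalous (β₂ < 0), so that dispersion and nonlinearity can balance.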
All of this mathematical activity around solitons caught the attention of the scientific community in a way that Scott Russell couldn’t. Over the next 30 years, solitons in a wide range of different fields were researched, including geomechanics, oceanography, astrophysics, quantum mechanics, and more.
Optical fibers are an important and practical application area for solitons. The linear dispersion properties of a fiber level out a soliton, while its nonlinear properties help the soliton achieve a focusing effect. The result is a very stable pulse that can travel for what seems like forever.
The behavior was first observed by a group at Bell Labs led by Linn Mollenauer in the 1980s that aimed to apply solitons to long-distance telecommunication systems. In the 1990s, a team of MIT researchers added optical filters to a transmission system in an attempt to maximize an optical soliton’s propagation distance. Using this method, Mollenauer’s group sent a 10-Gbit/s signal more than 20,000 km — impressive for the time. (Ref. 2) During the 2000s, optical soliton research ventured into the field of vector solitons, which have two distinct polarization components.
Current research from Lahav et al. aims to create solitons that are stable in all three dimensions, known as “light bullets”. This requires the simultaneous cancellation of diffraction and dispersion, which has been achieved with a highly structured material, but not with an unstructured one that can be used in practical applications. The Lahav group has investigated the fundamental properties of 3D solitons to develop more technological applications of solitons for fiber optics. They have also developed a method that involves shining a repetitive string of light pulses into a special material called “strontium barium niobate” to create a selfguided beam and cancel out the dispersion. This method creates a string of 3D solitons that could potentially be used in advanced nonlinear optics and optical information processing. (Ref. 3)
The solution to the KdV equation tells us that the speed of a soliton determines its amplitude and width. By investigating this effect, we can better predict the behavior and limitations of solitons for optical applications. Simulation can be used to visualize soliton behavior beyond numerical equations, without setting up resourceintensive or costly optical experiments. Besides demonstrating how speed influences the amplitude and width of a wave, simulation also shows how solitons collide and reappear while maintaining their shape (like the solitary waves Scott Russell observed “overtaking” each other in the ocean).
Simulation results that show solitons colliding and reappearing. Image from The KdV Equation and Solitons tutorial model.
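The speed-amplitude-width relationship can be checked directly from the analytic single-soliton solution. The following sketch (using one common normalization of the KdV equation; this is illustrative Python, not COMSOL functionality) evaluates the sech-squared profile for a slow and a fast soliton:

```python
import math

def soliton(x, t, c):
    """Single-soliton solution of the KdV equation u_t + 6*u*u_x + u_xxx = 0
    in one common normalization: u = (c/2) * sech^2(sqrt(c)/2 * (x - c*t))."""
    arg = 0.5 * math.sqrt(c) * (x - c * t)
    return 0.5 * c / math.cosh(arg) ** 2

def half_width(c):
    """Full width at half maximum of the sech^2 profile."""
    s = math.acosh(math.sqrt(2.0))          # solves sech^2(s) = 1/2
    return 2.0 * s / (0.5 * math.sqrt(c))   # convert scaled argument to x

slow, fast = 1.0, 4.0
print(soliton(0.0, 0.0, slow), soliton(0.0, 0.0, fast))  # peak amplitude is c/2
print(half_width(slow), half_width(fast))  # the faster soliton is half as wide
```

Doubling the amplitude requires quadrupling the speed, and the pulse narrows accordingly, which is exactly the coupling between speed, amplitude, and width discussed above.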
Predefined physics settings are an efficient and easy option for straightforward modeling tasks; however, optical solitons are anything but. Equation-based modeling enables you to expand what is normally possible with simulation for problems that require flexibility and creativity. Using equation-based modeling, you can seamlessly implement the KdV equation into the COMSOL Multiphysics® software by adding partial differential equations (PDEs) and ordinary differential equations (ODEs). You can even create a physics interface from your custom settings so that you don’t have to start from scratch the next time you need to set up a model.
Adding a PDE to a soliton model in the COMSOL Multiphysics graphical user interface (GUI), an example of equationbased modeling.
Equation-based modeling functionality makes it possible to simulate an initial pulse in an optical fiber as well as the resulting waves or solitons.
In 1885, Scott Russell’s book The Wave of Translation in the Oceans of Water, Air and Ether was published posthumously. It included his speculations on the physics of matter and how we can find the depths of the atmosphere and universe by computing the velocity of sound and light, respectively. (Ref. 4) Even at the end of his life, Scott Russell continued to theorize on how we can apply mathematics to the observable world as well as the significance of solitons in modern physics. If only he could have seen the development of the KdV equation or recent advancements in optics.
One of Scott Russell’s supporters, Osborne Reynolds, made a fitting observation in his own research of solitary waves: In deep water, groups of waves travel faster than the individual waves from which they were made. (Ref. 1) Perhaps we can think of John Scott Russell as the individual wave, inspiring others to keep moving toward a common goal.
Learn more about equation-based modeling and the other features and functionality in COMSOL Multiphysics.
Suppose that you have a set of external data (either from experimental measurements or a collection of reference data) that you want to model your simulation after. In this situation, you can use inverse modeling. As the name implies, inverse modeling is when you take a reverse modeling approach to your problem: Instead of solving for the outcome, you solve for the inputs.
To get the desired simulation results, there are several model inputs that you might want to investigate or experiment with, such as material properties. When solving for the values of these inputs, you are looking for the optimal values that provide you with the closest match between a set of external data and the simulation results. A natural approach is to minimize the sum of the squares of the differences between the data sets. As such, an efficient modeling strategy is to formulate the problem as a least-squares optimization problem. To streamline the process of setting up and solving the problem, you can use the Parameter Estimation study step in the COMSOL Multiphysics® software.
To use the Parameter Estimation study step, the study must be time dependent and a license for the Optimization Module is required. In addition, a set of reference data needs to be included through either an interpolation function or a user-defined reference expression. Note that the reference data must either be time dependent or a function of a single argument.
The Settings window for the Parameter Estimation study step.
The Parameter Estimation study step is useful for a variety of inverse modeling problems — mainly parameter estimation. The objective is to estimate values for the desired model inputs (i.e., parameters), which provides insight into the ways that the values (and hence the properties themselves) affect the objective function.
Perhaps one of the most typical uses of this functionality is curve fitting or similar datafitting applications. This process involves fitting a function to a series of data points. The fitting of the function is done by estimating the values for the coefficients used in the function, essentially fitting a parameterized analytic function to a collection of data. By fitting a curve to a set of data points, we can interpolate values from the function to areas where data isn’t explicitly available.
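As a minimal sketch of this idea (plain Python with made-up data points; in COMSOL this is handled through the study step), a straight line can be fit to data by solving the normal equations for its two coefficients, and the fitted function can then be evaluated where no data point exists:

```python
# Made-up data points that lie exactly on the line y = 1 + 2x
points = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

n = len(points)
sx = sum(x for x, _ in points)
sy = sum(y for _, y in points)
sxx = sum(x * x for x, _ in points)
sxy = sum(x * y for x, y in points)

# Normal equations for the least-squares line y = a + b*x
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
a = (sy - b * sx) / n                          # intercept

def fit(x):
    """Interpolate (or extrapolate) from the fitted function."""
    return a + b * x

print(a, b, fit(1.5))  # the fitted line passes through (1.5, 4.0)
```

Estimating the coefficients a and b from the data is exactly the "parameter estimation" step; evaluating fit(x) between the points is the interpolation payoff.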
In the tutorial video at the top of this blog post, we demonstrate a parameter estimation via a modified version of the elbow bracket tutorial model. Before computing this study, we need to properly define the problem…
Performing a parameter estimation study generally involves three major steps:

1. Defining the parameters, reference data, and variables needed to formulate the problem
2. Setting up and computing the Parameter Estimation study step
3. Postprocessing the results to compare the data sets and extract the estimated values
Let’s look at how to complete these steps and the important factors to consider when setting up the Parameter Estimation study step in a model.
Before we perform a parameter estimation study, we must create the definitions needed to formulate the problem. This typically involves creating a combination of parameters, functions, and variables. First, we define the parameters of the model inputs for which we want an estimated value. Next, we include the external data by defining either a reference function or expression. Lastly, we define a variable that pulls and evaluates the output quantity from the simulation results, which are compared to the measured output data.
In the video above, we perform a timedependent heat transfer analysis on the elbow bracket. The model data from the heat transfer simulation is then compared to the experimental data, which is used to estimate the value for the thermal conductivity of the material.
In the Heat Transfer in Solids node, the thermal conductivity is represented by k. Hence, we create a parameter named k, enter a rough estimate of its value, and use it in the appropriate node to define the thermal conductivity.
Left: The parameters used in the parameter estimation study, including the parameter k for estimating the thermal conductivity. Right: The node (named Solid 1) in which we use the parameter k to define the material property to be estimated.
Next, we create a definition so that we can implement the data from our external file into the COMSOL® software. In this case, the reference data is a collection of timedependent temperature measurements contained in a comma-separated values (CSV) file. This data can be quickly and easily entered into COMSOL Multiphysics by adding an Interpolation function to our model component and then using the Load from File button. The data is automatically imported in a tabular format, with the first column containing the times and the second column containing the temperature measurements.
The Interpolation function incorporates the reference data into the simulation. The Load from File button is used to import the external file into the function.
Under the Units section, we simply enter the respective units for the argument (time) and function (temperature). We do not need to be concerned with the options selected for the Interpolation and Extrapolation settings of the function, since the study only computes the differences at the times explicitly stated in the argument or t column of the function. Thus, the smoothing between data points and the behavior of the function outside of the range of the data is not relevant.
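The point that only the tabulated argument times matter can be illustrated with a small sketch (hypothetical CSV data, not from the actual model): the objective compares values at exactly the times in the first column, so no smoothing or extrapolation of the reference function is involved.

```python
import csv, io

# Hypothetical CSV reference data: time (s), measured temperature (K)
raw = """0,293.15
10,308.2
20,318.9
30,326.4"""

table = [(float(t), float(T)) for t, T in csv.reader(io.StringIO(raw))]

# Only the tabulated argument times enter the least-squares objective
times = [t for t, _ in table]
reference = dict(table)

print(times)            # times at which the differences are evaluated
print(reference[10.0])  # measured temperature at t = 10 s
```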
We now need to define an expression to extract the temperature quantity from the simulation results. (This quantity is later compared to the temperature measurements in the interpolation function.) The quantity we want to extract and use for comparison is the average temperature of the surface on the top-right end of the bracket.
Since we want to obtain the average of a quantity (temperature), we first add an Average component coupling under the Definitions node. We then select the geometry we want to obtain the average temperature for (i.e., the boundary on the top-right end of the elbow bracket). Note the tag in parentheses to the right of the Average component coupling, aveop1, as this will be used in the expression for defining our variable.
The Average component coupling (highlighted in blue) helps us obtain the average value of a quantity on the selected geometry.
To compare the computational and experimental results, we must define a variable to extract the quantity, and thus its value when computed, from the simulation results. Since we are looking at a specific part of the geometry, we define a local variable under the Definitions node or the Definitions ribbon tab. (A global variable is not suitable for our study, as the Global Definitions node is global in scope and defines, applies, or evaluates an expression over the entire model geometry.)
When defining the variable, we name it Tave, since we are looking to obtain the average (ave) temperature (T). For the expression, we can call out to the Average component coupling created earlier by entering aveop1. We then specify the quantity that we want the average of by entering the variable T (for temperature) in parentheses.
The defined variable, which is later used in the Parameter Estimation study step.
Now we can add and set up the Parameter Estimation study step, for which several settings have already been handled. The reference data and study step selections are automatically linked to the interpolation function that contains the external data and the timedependent study.
The expression entered in the Model expression field is evaluated and then compared to the external data at each time step specified in the reference function; i.e., each time in the argument column of the interpolation function. As such, this field is where we enter our local variable, Tave.
Notice that in the syntax of the Model expression field in the images below, we specify the location of our local variable, component 1, by including it before the variable name. The reason is that the scope of the Parameter Estimation study step is global. As a result, the study step does not “see” variables defined locally within a component unless we indicate their scope in the expression. For a global variable, we simply enter its name in the Model expression field. To enter the variable with its scope specified automatically, you can use the Auto Completion feature to select and enter the variable from the list of definitions.
Left: Using the Auto Completion feature to select the local variable defined earlier. Right: A screenshot of the completed setup for the Parameter Estimation study step.
Now we just need to identify the parameters we want to estimate and select the optimization method. We provide a rough estimate for the parameter under the Initial value column, add an upper and lower bound for the values that the parameter can take, and set the Scale value. Applying the appropriate scale is important, as it can significantly slow down the convergence of the optimization solver or stop it from converging altogether. Using the default value works for the elbow bracket example, but it might not always be suitable. (For more information, read the chapter on parameter estimation in the Optimization Module User’s Guide.)
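To see why the Scale value matters, consider a rough sketch (invented numbers; a toy pattern search, far simpler than COMSOL's actual solvers): an optimizer taking order-one steps cannot resolve a parameter of order 1e-7 in a limited number of iterations, but rescaling the unknown to order one fixes this.

```python
K_TRUE = 2.7e-7  # hypothetical parameter value of order 1e-7

def objective(k):
    """Sum-of-squares objective with its minimum at K_TRUE."""
    return (k - K_TRUE) ** 2

def coordinate_search(f, x0, step, shrink=0.5, iters=25):
    """Naive pattern search: step toward lower f, shrink the step when stuck."""
    x = x0
    for _ in range(iters):
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step *= shrink
    return x

# Unscaled: order-one steps in k cannot resolve a minimum near 2.7e-7
k_raw = coordinate_search(objective, 1.0, 0.1)

# Scaled: search s with k = scale * s, so the optimum sits at s = 2.7
scale = 1e-7
k_scaled = scale * coordinate_search(lambda s: objective(scale * s), 1.0, 0.1)

print(k_raw, k_scaled)  # only the scaled search lands near 2.7e-7
```

Within the same iteration budget, the unscaled search stalls near zero while the scaled search recovers the parameter, which is the behavior the Scale setting is there to prevent.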
Next, we want to select the appropriate optimization algorithm and compute it. The different methods apply to certain use cases, which are discussed in further detail in the video. Since the parameter we want to estimate controls the value of a material property — and we want to impose an upper and lower bound on the value — we use the SNOPT method.
Once the model finishes solving, we need to perform some additional postprocessing. We can visualize and compare the two sets of data by plotting the results, and then we can extract the optimal values for the estimated model inputs. To see and compare both sets of data on a single plot, we add a new 1D Plot Group and include two Global plots under it. One Global plot displays the simulation results, while the other displays the reference data.
Since our interpolation function is a collection of data points, we want it to appear as a point graph. To do so, we change the time selection of the data points so that they are only plotted at the times specified by the argument column of the function. Additionally, we adjust several of the settings in the Coloring and Style section to further distinguish the two sets of data.
Comparison of the simulation and experimental results. The highlighted sections in the Settings window need to be adjusted to display the experimental data as a point graph.
This plot shows that the model results closely match the experimental data. We can now extract the optimal value for the thermal conductivity of the elbow bracket material. To do so, we add a Global Evaluation node under the Results tab (or under the Derived Values node). In the Settings window, all we need to do is update the Time selection option to Last so that we evaluate the value at the last time step, since the thermal conductivity is independent of time.
The Settings window for the Global Evaluation node.
After entering the parameter k in the Expression field, we click Evaluate and are provided with the optimal value for that parameter.
The optimal value for the thermal conductivity.
As shown above, a thermal conductivity of ~0.27 W/(m*K) provides the simulation temperature results that best match measurements from experimental data.
In COMSOL Multiphysics, the Parameter Estimation study step helps estimate the optimal values for the inputs of a simulation. By estimating the values that define various aspects of a model, you can investigate the parts of the problem that either hinder or help in best matching the computed results with a data set from an external file.
This functionality also enables you to solve other types of inverse modeling problems by streamlining and expediting the process of defining, setting up, and solving a least-squares optimization problem. For more details about this useful study step, watch the tutorial video at the top of this post, which provides guidance on how to use this functionality with a simple example.
To learn more about parameter estimation in COMSOL Multiphysics, its use cases, and the various study step settings, read the chapters on parameter estimation in both the Introduction to the Optimization Module and Optimization Module User’s Guide documentation.
An archived webinar on parameter estimation is also available. (Note that the archived webinar uses the Optimization interface instead of the Parameter Estimation study step.)
Say that, by some measurement technique, we have been able to obtain data that describes how a material property varies inside a representative volume of a material. By visualizing the data, we can identify several regions with distinct properties; for example, the pores and the solid regions in a block of porous material. But what do we do if the identified regions form highly irregular shapes that are not suitable for generating geometry in the COMSOL Multiphysics® software?
If we have access to the material properties in coordinates inside our representative volume, the shape of which we can easily draw, then we can use an interpolation function based on this data to define the material for our simulation, thus skipping the creation of the irregularly shaped geometry object altogether.
One example is the Pore-Scale Flow tutorial model, where we solve the incompressible, stationary Brinkman equations on a rectangular-shaped domain with material parameters based on an image function.
The image function representing the porous medium used in the Pore-Scale Flow tutorial. The color ranges from 0 to 1, where blue (0) represents the fluid and red (1) the solid.
In the regions marked with red, the material parameters are set to imitate a solid, and the material parameters for the fluid are defined in the blue region.
Let’s return to the previously discussed model of the human head, which we lofted based on curve data in different cross sections. Say we don’t have the curve data needed to loft the head or access to the Loft operation in the add-on Design Module. Instead, we have a text file with the material properties defined in coordinates inside the region. Let’s start by introducing the tutorial where the geometry of the head is used.
Slice plot showing the electrical conductivity (S/m) for the air (dark blue) and the head.
The Specific Absorption Rate (SAR) in the Human Brain tutorial models the absorption of a radiated wave from an antenna into a human head, as well as the resulting increase in temperature. The patch antenna is placed on the left side of the head and is excited by a lumped port. The tutorial described in this blog post demonstrates how to set up the model without the geometry of the head and shows what modifications are needed to accomplish this.
Specific Absorption Rate (SAR) in the Human Brain tutorial model. The isosurfaces show the temperature increase, dT (K), as a result of the absorbed radiation from the antenna.
The goal is to import material data into interpolation functions that we can use to define the material properties for the computational domain, whose geometry then becomes much simpler. Without the head geometry, we only need the sphere, which represents both the head and the surrounding air, and the domains for the perfectly matched layers (PMLs). We also keep the smaller block with added edges, which represents the patch antenna.
Geometry for the simulation without the geometry object for the head.
A text file with material data can be formatted in different ways. For this example, the Spreadsheet format is used. In such a text file, there is one column each for the x-, y-, and z-coordinates, followed by an arbitrary number of data columns. The columns in the text file of our model include:
x, y, z
k (W/(m*K))
epsr (1)
mur (1)
sigma (S/m)
omega_head (1/s)
If the text file contains NaN entries, it is usually recommended to replace them with zero, a large value, or a small value, depending on the material property, since the solver produces errors if the functions interpolate to NaN inside the computational domain. We create one Interpolation function per data column in the file and add the units for the arguments as well as for the function itself so that the units are correctly recognized in the physics. When all data columns have the same unit, it is possible to create all of the functions within the same Interpolation feature.
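As a rough illustration of this workflow (outside of COMSOL Multiphysics), the following sketch reads spreadsheet-style columns, replaces NaN entries, and builds one interpolation function per data column. The data values and the choice of a nearest-neighbor interpolant are assumptions made for the example.

```python
import numpy as np
from scipy.interpolate import NearestNDInterpolator

# Hypothetical spreadsheet-format data: x, y, z columns followed by data
# columns (here only one, sigma in S/m). NaN entries are sanitized first,
# mirroring the recommendation in the text.
data = np.array([
    [0.0, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, np.nan],   # NaN entry to be replaced
    [0.0, 0.1, 0.0, 0.3],
    [0.1, 0.1, 0.0, 0.4],
])
coords, sigma = data[:, :3], data[:, 3]
sigma = np.nan_to_num(sigma, nan=0.0)  # zero here; the right choice depends on the property

# One interpolation function per data column, as in the model
sigma_int = NearestNDInterpolator(coords, sigma)
val = sigma_int([[0.04, 0.0, 0.0]])[0]
print(val)  # nearest data point is (0, 0, 0) -> 0.1
```

In the tutorial itself the equivalent step is done in the Interpolation feature's Settings window, including the units for the arguments and the function.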
We create the Interpolation features in the Global Definitions section of the Model Builder, since we want the material property functions not only for defining the material properties for the physics but also for helping generate a finer mesh where the surface of the head would be. To do this, we use a Size Expression feature when generating the mesh, as discussed later in the blog post.
The Settings window for one of the interpolation functions. The number in the Position in file field corresponds to the number of the data column, starting from 1. In this image, sigma_int is taken from the fourth data column in the text file. The unit of the function can be entered in the bottom of the Settings window.
When geometry boundaries define the transition from one domain to the next, the mesh must always follow the shape of the faces that separate the domains. Here, there are no faces that represent the head, so we need to manually tell the mesh algorithm where it is important to refine the mesh. The mesh size can be about the same inside the head as outside the head, so it is most important to resolve the border between the two materials (head and air). In general, it is important to resolve a change between materials as that will typically impose a gradient in the fields. The larger the gradients, the finer the mesh must be to resolve the transition. For some applications, it is important to also resolve the mesh in the complete domain and not just at the borders between the different materials.
To refine the mesh along the shape of the head, we add two global variables based on the interpolation function k_int, as that function equals 0.5 W/(m*K) inside the head and is almost zero in the air. The variable avMat is 1 inside the head and onBnd is 1 in the vicinity of the boundary:

avMat = (k_int(x-d,y,z)/0.5 + k_int(x+d,y,z)/0.5 + k_int(x,y-d,z)/0.5 + k_int(x,y+d,z)/0.5 + k_int(x,y,z-d)/0.5 + k_int(x,y,z+d)/0.5)/6
onBnd = (avMat>0.01)*(avMat<0.99)
The parameter d is equal to the fine mesh size that is used in the Size Expression node and defines the thickness of the area where the mesh is refined (highlighted in red in the following image).
A slice plot of the onBnd variable. The values range from zero (blue) to 1 (red). The mesh is refined in the region shown in red using a mesh Size Expression.
Under the Free Tetrahedral node in the Mesh sequence, a Size Expression feature node is added to specify an expression that determines the mesh size to use. Note that the expression should evaluate to the size you want to use for your mesh (in meters if you are using SI units).
We use the default option, Evaluate on: Grid. This means that the expression is evaluated on a regular grid and interpolated between the grid points. This grid is not visible in the model but merely used to evaluate the size expression. When using this setting, it is important that all variables and functions used in the expression are defined under Global Definitions. We also increase the setting Number of cells per dimension to 50 (the default value is 25) to better resolve the region with a finer regular evaluation grid. Using any of the other options in the Evaluate on list makes it possible to manually control the mesh on which the evaluation is done. The Size expression we use is defined as
onBnd*Fine+!onBnd*Coarse
where the two parameters, Fine and Coarse, are defined as 0.007 m and 0.07 m, respectively. When they are defined as parameters, they can be easily changed manually or varied in a parameter sweep by the Parametric Sweep study. There is a plot group called Mesh plot in the tutorial model that shows a cross section of the mesh used for the calculations. You can learn more in a previous blog post on visualizing mesh in greater detail.
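To see how these pieces fit together, here is a small numerical sketch (in Python, not COMSOL syntax) of the avMat/onBnd logic on a one-dimensional cut through a hypothetical head/air border at x = 0; the step-function stand-in for k_int is an assumption made purely for illustration.

```python
import numpy as np

d = 0.007                    # the fine mesh size parameter
Fine, Coarse = 0.007, 0.07   # mesh sizes in meters, as in the tutorial

def k_int(x):
    # crude stand-in for the interpolated data: 0.5 W/(m*K) in the "head"
    # (x < 0), almost zero in the "air"
    return np.where(x < 0.0, 0.5, 1e-6)

def avMat(x):
    # 1D analogue of the six-point average used in the model
    return (k_int(x - d) / 0.5 + k_int(x + d) / 0.5) / 2

def onBnd(x):
    a = avMat(x)
    return (a > 0.01) & (a < 0.99)

def size_expr(x):
    # onBnd*Fine + !onBnd*Coarse
    return np.where(onBnd(x), Fine, Coarse)

x = np.array([-0.05, 0.0, 0.05])
print(size_expr(x))  # fine size only near the border at x = 0
```

Deep inside either material the average is 0 or 1, so the coarse size is used; only within a distance d of the border does onBnd switch on and request the fine size.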
A mesh plot using a mesh Filter, plotting the mesh element size for x < 0. This image shows a much more refined mesh to make the plot clear. The tutorial model available for download uses a coarser mesh to save memory and time.
As mentioned above, we will only discuss the differences in setting up the materials and physics compared to the original tutorial. You can get details about the materials and physics used in the Specific Absorption Rate (SAR) in the Human Brain tutorial model. We keep Material 1 as the material for the antenna. A new material is added for the rest of the domains (the air and head). The material properties are defined using the interpolation functions k_int(x,y,z), epsr_int(x,y,z), mur_int(x,y,z), and sigma_int(x,y,z).
We select domain 5 (the air and head domain) to be active in the Bioheat Transfer physics interface. In the Bioheat subnode, two of the material properties for the blood are defined using the global variable avMat to make sure the value of the material properties is zero outside the head. For example, Specific heat, blood is defined as c_blood*avMat, where c_blood is a parameter. The Blood perfusion rate is set to the interpolation function omega_blood(x,y,z). We are now done with our modifications and are ready to compare the results after solving the model.
As seen in the images showing the SAR value, the radiation is absorbed in a similar manner in the two models. You can see the shape of the head in the slices, even though the edges of the slices are rather rough compared to the original model with the geometry object for the head. Manual color and data ranges are used for the plot in this tutorial, using interpolation functions to filter out the shape of the head.
Logscale slice plot of the local SAR value. The left image shows the original tutorial and the right image shows the results from the model using interpolated materials.
We use a Volume maximum feature to calculate the maximum temperature rise in the head. This value is about 0.15 K in the original model and about 0.17 K in the model with the interpolated materials.
The two main components for obtaining adequate results are the resolution of the material data and the resolution of the calculation mesh.
It is possible to have material data that is much more resolved than the actual calculation mesh, but a finer calculation mesh will not help if the material data is not resolved well enough to match it. While a fine calculation mesh requires more memory and time to solve, better-resolved material data usually just takes more time to solve, as there is more data to interpolate from.
There is also another source of error: the fact that we are solving for bioheat transfer in the air with a small, nonzero thermal conductivity. However, this shouldn't influence the results too much. The main sources of errors are the resolution of the material data and the calculation mesh.
In this blog post, we have shown how to set up a model with coordinate-dependent material properties defined in a text file, as well as how to adapt the mesh size to accurately resolve the border between two materials. We also discussed the sources of error that come into play and how to improve the accuracy of the results. In a modeling scenario where we cannot generate the geometry of highly irregularly shaped objects, this approach can be a real lifesaver.
Try the tutorial model featured in this blog post by clicking the button below. From the Application Gallery, you can log into your COMSOL Access account and download the MPH-file.
The most common type of HPC hardware is a cluster: a group of individual computers (often called nodes) connected by a network. Even if there is only one dedicated simulation machine, you can think of it as a one-node cluster.
The COMSOL Reference Manual also calls a single COMSOL Multiphysics process a node. The difference is rarely important, but when it does matter, we will call a computer a physical node or host and an instance of the COMSOL Multiphysics program a compute node or process.
An example of a cluster with four compute nodes.
The work that we want to perform on the cluster is bundled into atomic units, called jobs, that are submitted to the cluster. A job in this context is a study being run with COMSOL Multiphysics.
When you submit a job to a cluster, the cluster does two things: it schedules the job, deciding when it will run, and it allocates the resources that the job will use. These tasks are performed by special programs called schedulers and resource managers, respectively. Here, we use the term scheduler interchangeably for both, since most programs perform both tasks at once, anyway.
Note that it is possible to submit COMSOL Multiphysics jobs to a cluster using the comsol batch command (on the Linux® operating system) or comsolbatch.exe (on the Windows® operating system) in a script that you submit to the cluster. You might prefer this method if you’re already familiar with console-based access to your cluster. For additional information, please see the COMSOL Knowledge Base article “Running COMSOL® in parallel on clusters”.
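For reference, the kind of command such a script would run can be assembled like this; the installation path, file locations, and log file name below are hypothetical and will differ on your cluster (the -inputfile, -outputfile, and -batchlog arguments follow the documented comsol batch syntax).

```python
import shlex

# A sketch of what a scheduler job script might wrap, assuming a Linux
# cluster. All paths are hypothetical placeholders.
comsol_bin = "/usr/local/comsol/v53a/multiphysics/bin/comsol"  # installation-dependent
cmd = [
    comsol_bin, "batch",
    "-inputfile", "/home/user/models/busbar.mph",
    "-outputfile", "/home/user/models/busbar_solved.mph",
    "-batchlog", "/home/user/models/busbar.mph.log",
]
print(shlex.join(cmd))
# In a real setup, this command line would sit inside a job script that you
# hand to the scheduler, e.g. with sbatch on a SLURM(R) cluster.
```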
In the following sections, we will discuss using the Cluster Computing node to submit and monitor cluster jobs from the COMSOL Desktop® graphical interface.
Whenever I want to configure the Cluster Computing node for a cluster that I am not familiar with yet, I like to start with a simple busbar model. This model solves in a few seconds and is available with any license, which makes testing the cluster computing functionality very easy.
To run the busbar model on a cluster, we add the Cluster Computing node to the main study. We might need to enable Advanced Study Options first, though. To do so, we activate the option in Preferences or click the Show button in the Model Builder toolbar.
Activate Advanced Study Options to enable the Cluster Computing node.
Now the Cluster Computing node can be added to any study by rightclicking the study and selecting Cluster Computing.
Rightclick a study node and select Cluster Computing from the menu to add it to the model.
The default settings for the Cluster Computing node.
If you can’t find the Cluster Computing node, chances are your license is not cluster-enabled (this is the case for CPU licenses and academic class kit licenses, for example). In this case, you can contact your sales representative to discuss licensing options.
The most complex part of using the Cluster Computing node is finding the right settings and using it for the first time. Once the node works on your cluster for one model, it is very straightforward to adjust the settings slightly for other simulations.
To store the settings as defaults, you can change the settings under Preferences in the sections Multicore and Cluster Computing and Remote Computing. Alternatively, you can apply the default settings to the Cluster Computing node directly and click the Save icon at the top of the Settings window. It is highly recommended to store the settings as default settings either way, so you do not have to type everything again for the next model.
Discussing all of the possible settings for the Cluster Computing node is out of scope of this blog post, so we will focus on a typical setup. The COMSOL Multiphysics Reference Manual contains additional information. In this blog post, it is assumed that the local computer runs the Windows® operating system, that the remote cluster runs the Linux® operating system, and that the cluster uses SLURM® software as its scheduler.
These settings are shown in this screenshot:
First, let’s talk about the section labeled Cluster computing settings. Since our cluster uses SLURM® software as its scheduler, we set the Scheduler type to “SLURM”. The following options are SLURM®-specific:
On the machine used in this example, we have two queues: “cluster” for jobs of up to 10 physical compute nodes with 64 GB of RAM each and “fatnode” for a single node with 256 GB. Every cluster will have different queues, so ask your cluster administrator what queues to use.
The next field is labeled “Directory”. This is where the solved COMSOL Multiphysics files go on a local computer when the job is finished. This is also where the COMSOL® software will store any intermediate, status, and log files.
The next three fields specify locations on the cluster. Notice that Directory was a Windows® path (since we are working on a Windows® computer here), but these are Linux® paths (since our cluster uses Linux®). Make sure that the kind of path matches the operating system on the local and remote side!
The Server Directory specifies where files should be stored when using cluster computing from a COMSOL Multiphysics session in client-server mode. When executing cluster computing from a local machine, this setting is not used, so we leave it blank. We do need the external COMSOL batch directory, however. This is where model files, status files, and log files should be kept on the cluster during the simulation. For these paths, be sure to choose a directory that already exists where you have write permissions; for example, some place in your home directory. (See this previous blog post on using client-server mode for more details.)
The COMSOL installation directory is self-explanatory and should contain the folders bin, applications, and so on. This is usually something like “/usr/local/comsol/v53a/multiphysics/” by default, but it obviously depends on where COMSOL Multiphysics is installed on the cluster.
Remote connection settings.
The next important section is the Remote and Cloud Access tab. This is where we specify how to establish the connection between the local computer and remote cluster.
To connect from a Windows® workstation to a Linux® cluster, we need the third-party program PuTTY to act as the SSH client for the COMSOL® software. Make sure to have PuTTY installed and that you can connect to your cluster with it. Also, make sure that you set up password-free authentication with a public-private key pair. There are many tutorials online on how to do this, and your cluster administrator can help you. When this is done, enter the installation directory of PuTTY as the SSH directory and your private key file from the password-free authentication in the SSH key file. Set the SSH user to your login name on the cluster.
While SSH is used to log in to the cluster and run commands, SCP is used for file transfer, for example, when transferring model files to or from the cluster. PuTTY uses the same settings for SCP and SSH, so just copy the settings from SSH.
Lastly, enter the address of the cluster under Remote hosts. This may be a host name or an IP address. Remember to also set the Remote OS to the correct operating system on the cluster.
When you are done, you can click the Save icon at the top of the Settings window to start with these settings next time you want to run a remote cluster job.
Another way to test whether your cluster settings work is to use the Cluster Setup Validation app, available as of COMSOL Multiphysics version 5.3a.
The settings that change every time you run a study include the model name and the number of physical nodes to use. When you click to run the study, COMSOL Multiphysics begins the process of submitting the job to the cluster. The first step is invisible and involves running SCP to copy the model file to the cluster. The second step is starting the simulation by submitting a job to the scheduler. Once this stage starts, the External Process window automatically appears and informs you of the progress of your simulation on the cluster. During this stage, the COMSOL Desktop® is locked and the software is busy tracking the remote job.
Tracking the progress of the remote job in the External Process window from scheduling the job (top) to Done (bottom).
This process is very similar to how the Batch Sweep node works. In fact, you may recognize the External Process window from using the batch sweep functionality. Just like when using a batch sweep, we can regain control of the GUI by clicking the Detach Job button below the External Process window, to detach the GUI from the remote job. We can later reattach to the same job by clicking the Attach Job button, which replaces the Detach Job button when we are detached.
Normally, running COMSOL Multiphysics on two machines simultaneously requires two license seats, but you can check the Use batch license option to detach from a remote job and keep editing locally with only one license seat. In fact, you can even submit multiple jobs to the cluster and run them simultaneously, as long as both jobs are just variations of the same model file; i.e., they only differ in their global parameter values. The only restriction is that your local username needs to be identical to the username on the remote cluster so the license manager can tell that the same person is using both licenses. Otherwise, an extra license seat will be consumed, even when the Use batch license option is enabled.
As soon as the simulation is done, you are prompted to open the resulting file:
Once the cluster job has finished, you are prompted to immediately open the solved file.
If you select No, you can still open the file later, because it will have already been downloaded and copied to the directory that was specified in the settings. Let’s have a look at these files:
Files created during the cluster job on the local side.
These files are created and updated as the simulation progresses. COMSOL Multiphysics periodically retrieves each file from the remote cluster to update the status in the Progress window and informs you as soon as the simulation is done. The same files are also present on the remote side:
Files created during the cluster job on the remote side. Note: Colors have been changed from the default color scheme in PuTTY to emphasize MPHfiles.
Here is a rundown of the most relevant file types:
| File           | Remote Side | Local Side |
| backup*.mph    | yes         | N/A        |
| *.mph          | yes         | yes        |
| *.mph.log      | yes         | yes        |
| *.mph.recovery | yes         | yes        |
| *.mph.status   | yes         | yes        |
| *.mph.host     | yes         | N/A        |
The busbar model, being so small, is not something that we would want to realistically run on a cluster. After using that example to test the functionality, we can open up any model file, add the Cluster Computing node (populated with the defaults we set before), change the number of nodes and filename, and click Compute. The Run remote options, scheduler type, and all of the associated settings don’t need to be changed again.
What does the COMSOL® software do when we run a model on multiple hosts? How is the work split up? Most algorithms in the software are parallelized, meaning the COMSOL Multiphysics processes on all hosts work together on the same computation. Distributing the work over multiple computers provides more computing resources and can increase performance for many problems.
However, it should be noted that the required communication between cluster nodes can produce a performance bottleneck. How fast the model will solve depends a lot on the model itself, the solver configuration, the quality of the network, and many other factors. You can find more information in this blog series on hybrid modeling.
Another reason to use the hardware power of a cluster is that the total memory that a simulation needs stays approximately constant, but there is more memory among all of the hosts, so the memory needed per host goes down. This allows us to run really large models that would not otherwise be possible to solve on a single computer. In practice, the total memory consumption of the problem goes up slightly, since the COMSOL Multiphysics processes need to track their own data as well as the data they receive from each other (usually much less). Also, the exact amount of memory a process will need is often not predictable, so adding more processes can increase the risk that a single physical node will run out of memory and abort the simulation.
A much easier case is running a distributed parametric sweep. We can speed up the computation by using multiple COMSOL Multiphysics processes and having each work on a different parameter value. We call this type of problem “embarrassingly parallel”, since the nodes do not need to exchange information across the network at all while solving. In this case, if the number of physical nodes is doubled, then ideally the simulation time will be cut in half. The actual speedup is typically not quite this good, as it takes some time to send the model to each node and additional time to copy the results back.
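A back-of-the-envelope model makes the point, using illustrative numbers only: 32 parameter values at 60 s each, with an assumed fixed 5 s of per-job setup and transfer overhead (both figures are made up for the example).

```python
import math

def sweep_time(n_params, t_per_param, n_nodes, overhead=5.0):
    # Each node solves its share of the parameter values; the fixed overhead
    # models sending the model out and copying results back.
    return math.ceil(n_params / n_nodes) * t_per_param + overhead

t1 = sweep_time(32, 60.0, 1)   # one node solves all 32 parameter values
t8 = sweep_time(32, 60.0, 8)   # eight nodes solve 4 values each
print(f"speedup on 8 nodes: {t1 / t8:.2f}x (ideal: 8x)")
```

The overhead keeps the measured speedup just below the ideal linear value, which is the behavior described above.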
To run a distributed parametric sweep, we need to activate the Distribute parametric sweep option at the bottom of the settings for the parametric sweep. Otherwise, the simulation will run one parameter at a time using all of the cluster nodes, with the parallelization performed on the level of the solver, which is much less efficient.
If you run an auxiliary sweep, you can also check the Distribute parametric solver option in the study step, for example, to run a frequency sweep over many frequencies in parallel using multiple processes on potentially many physical nodes. Note that if you use a continuation method, or if individual simulations depend on each other, then this method of distributing the parameters does not work.
Note: Do not use the Distribute parametric sweep option in the Cluster Computing node itself, as it has been deprecated. It is better to specify this directly in the settings for the parametric sweep.
Activate the Distribute parametric sweep option to run each set of parameters on a different node in parallel.
To run a sweep in parallel, we can also use the Cluster Sweep node, which combines the features of the Batch Sweep node with the ability of the Cluster Computing node to run jobs remotely. You can say that a cluster sweep is the remote version of the batch sweep, just like the Cluster Computing node is the remote version of the Batch node. We will discuss cluster sweeps in more detail in a future blog post.
The most important difference to remember is that the Cluster Computing node submits one job for the entire study (even if it contains a sweep), while the Cluster Sweep and Batch Sweep nodes create one job for each set of parameter values.
All of what is covered in this blog post is also available from simulation apps that are run from either COMSOL Multiphysics or COMSOL Server™. An app simply inherits the cluster settings from the model on which it is based.
When running apps from COMSOL Server™, you get access to cluster preferences in the administration web page of COMSOL Server™. You can let your app use these preferences to have the cluster settings hardwired and customized for a particular app. If you wish, you can design your apps so that the user of the app gets access to one or more of the lowlevel cluster settings. For example, in your app’s user interface, you can design a menu or list where users can select between different queues, such as the “cluster” or “fatnode” options mentioned earlier.
Whether you are using a university cluster, a virtual cloud environment, or your own hardware, the Cluster Computing node enables you to easily run your simulations remotely. You don’t usually need an expensive setup for this purpose. In fact, sometimes all you need is a Beowulf cluster for running parametric sweeps while you take care of other tasks locally.
Cluster computing is a powerful tool to speed up your simulations, study detailed and realistic devices, and ultimately help you with your research and development goals.
SLURM is a registered trademark of SchedMD LLC.
Linux is a registered trademark of Linus Torvalds in the U.S. and other countries.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Topology optimization helps engineers design applications in an optimized manner with respect to certain a priori objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses for microacoustics topology optimization.
A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.
The governing equation is the standard wave equation with material parameters given in terms of the density and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via an interpolation variable. This variable ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.
Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.
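As a sketch of what such an interpolation looks like, here is a SIMP-style curve in Python; the material values and the penalization exponent p = 3 are assumptions made for illustration, and the exact scheme and exponent used in the figure may differ.

```python
import numpy as np

# Assumed endpoint values: air vs. a hypothetical solid
rho_air, rho_solid = 1.2, 2700.0   # kg/m^3
K_air, K_solid = 1.42e5, 7.0e10    # Pa (bulk moduli)
p = 3.0                            # assumed SIMP penalization exponent

def simp(gamma, v_air, v_solid):
    # gamma = 0 -> air, gamma = 1 -> solid; the power law penalizes
    # intermediate values so the optimizer is pushed toward 0/1 designs
    return v_air + gamma**p * (v_solid - v_air)

gamma = np.linspace(0, 1, 5)
print(simp(gamma, rho_air, rho_solid))
```

The penalization makes intermediate gamma values "unattractive" to the optimizer, which is what drives the design toward a crisp air/solid layout.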
Using this approach will work for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.
For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.
Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.
An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.
The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.
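This boundary-layer behavior can be sketched with the classic analytical profile for oscillatory flow between two parallel plates (the circular tube in the figure behaves analogously, with Bessel functions in place of cosh); the frequency and gap size below are assumed purely for illustration.

```python
import numpy as np

f = 1000.0    # Hz, assumed frequency
nu = 1.5e-5   # m^2/s, kinematic viscosity of air
h = 1e-3      # m, assumed half-distance between the plates

omega = 2 * np.pi * f
k = np.sqrt(1j * omega / nu)   # complex boundary-layer wavenumber

# Nondimensional velocity profile across the gap: 0 at the walls (no slip),
# approaching 1 (the bulk value from Euler's equation) in the middle
y = np.linspace(-h, h, 5)
psi_v = 1 - np.cosh(k * y) / np.cosh(k * h)
print(np.abs(psi_v))  # ~0 at the walls, ~1 in the middle
```

The thinner the boundary layer is relative to the gap, the more abruptly the profile jumps from zero at the wall to the bulk value, which is exactly the velocity behavior described for the tube above.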
Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent conditions. These equations are implemented in the Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization, where certain assumptions can be used. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as

\mathbf{v} = -\frac{\psi_v}{i\omega\rho_0}\nabla p

where the viscous field \psi_v is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity in the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between temperature variation and acoustic pressure can be written in a general form (Ref. 1) as

T = \frac{\psi_h}{\rho_0 C_p}\,p

where the thermal field \psi_h is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.
As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.
For thermoviscous acoustics, there is no established interpolation scheme, unlike in standard acoustic topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.
For simplicity, we look only at wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which may be known to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as

\Delta_{cd}\psi_v + \frac{i\omega}{\nu}\left(\psi_v - 1\right) = 0 \qquad (1)

where \Delta_{cd} is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.
In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscous acoustic topology optimization, I came up with a heuristic approach in which the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are a vanishing field on solid walls, $\Psi_v = 0$, and a vanishing normal derivative, $\nabla \Psi_v \cdot \mathbf{n} = 0$, on symmetry boundaries.
These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner (in an assumed notation): $\Delta_{cd}\,\Psi_v + a_v k_v^2\,\Psi_v = f_v\, a_v k_v^2$.
We already know that for air domains, $(a_v, f_v) = (1, 1)$, since that gives us the original equation (1). If we instead set $a_v$ to a large value, so that the gradient term becomes insignificant, and set $f_v$ to zero, we get $\Psi_v = 0$.
This corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, $(a_v, f_v)$ should have values of ("large", 0). Thus, we have established our interpolation extremes: $(a_v, f_v)_{\text{air}} = (1, 1)$ and $(a_v, f_v)_{\text{solid}} = (\text{large}, 0)$.
I carried out a comparison between the explicit boundary conditions and the interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas in the adjacent domains on the right, the suggested values of $a_v$ and $f_v$ are used as input.
Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.
The field in all domains is now calculated at a frequency for which the boundary layer is thick enough to visibly occupy part of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.
Figure 4: The resulting field with contours for the setup in Figure 3.
The actual interpolation between these extremes is done via, for example, SIMP or RAMP schemes (Ref. 2), as in standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure variable via equations. With this, the world’s first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.
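As an illustration of how such an interpolation could look (the SIMP-style exponent, the "large" value for $a_v$, and the function name below are assumptions for this sketch, not the exact scheme from Ref. 2):

```python
# Illustrative SIMP-style interpolation between the air extreme
# (a_v, f_v) = (1, 1) and the solid extreme (a_v, f_v) = (large, 0).
# The design variable theta runs from 0 (air) to 1 (solid); the
# penalization exponent and the "large" coefficient are assumptions.

A_LARGE = 1.0e4  # stand-in for the "large" coefficient in solids
P = 3.0          # typical SIMP penalization exponent

def interpolate_av_fv(theta, p=P, a_large=A_LARGE):
    """Return (a_v, f_v) for a design variable theta in [0, 1]."""
    a_v = 1.0 + (a_large - 1.0) * theta**p  # 1 for air, large for solid
    f_v = (1.0 - theta)**p                  # 1 for air, 0 for solid
    return a_v, f_v

# The extremes reproduce the two variants of the governing equation:
print(interpolate_av_fv(0.0))  # air:   (1.0, 1.0)
print(interpolate_av_fv(1.0))  # solid: (10000.0, 0.0)
```

Intermediate values of the design variable then blend smoothly between the air-like and solid-like equations, which is what lets a gradient-based optimizer move material in and out of the design domain.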
Here, we give an example that shows how the optimization method can be used in a practical case. A tube with a hexagonal cross section has a certain acoustic loss due to viscosity effects. Each side of the hexagon is approximately 1.1 mm long, which gives an area equivalent to that of a circle with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek an optimal topology that gives a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:
Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.
Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.
The normalized acoustic losses for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to its value at 100 Hz.
Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.
For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.
This novel topology optimization strategy can be expanded to a more general 1D method, where the pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics, in both academia and industry, to focus on improving topology optimization. I hope to see many advances in this area in the future.
René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Born in 1707 in Basel, Switzerland, Leonhard Euler (pronounced “oiler”) was a prolific mathematician who published more than 800 articles during his lifetime. He studied under the famous Johann Bernoulli and received his master’s degree in philosophy from the University of Basel. Before moving to St. Petersburg, Russia, to work at the university, Euler submitted his first paper to the Paris Academy of Sciences, coming in second place at only 19 years old.
A portrait of Leonhard Euler. Image in the public domain, via Wikimedia Commons.
Euler quickly rose through the academic ranks and in 1733 succeeded Bernoulli as the chair of mathematics in St. Petersburg. Euler moved to Berlin in 1741 at the invitation of King Frederick II. In his 25 years there, he wrote around 380 articles and the first volume of his seminal book Introductio in Analysin Infinitorum, which formally defined functions for the first time; introduced the notation $f(x)$; popularized the notations $e$ and $\pi$; and established the critical formula $e^{i\pi} + 1 = 0$.
Joseph-Louis Lagrange (pronounced “luh-gronj”) was born Giuseppe Lodovico Lagrangia in Turin. Today, this city is the capital of the region of Piedmont in Italy, but when Lagrange was born in 1736, it was ruled by the Duke of Savoy as part of the Kingdom of Sardinia. Lagrange developed an interest in mathematics and, after working independently on novel topics, began corresponding with Euler, whom he succeeded when Euler left Berlin.
A portrait of Joseph-Louis Lagrange. Image in the public domain, via Wikimedia Commons.
In Berlin, Lagrange developed most of the mathematics for which he is famous today. He played an important role in the development of variational calculus and came up with the Lagrangian approach to mechanics. Although Lagrangian mechanics makes the same predictions as Newton’s laws of motion, the Lagrangian functional introduced by Lagrange allows the classical mechanics of many problems to be described in a mathematically more straightforward and insightful manner than in Newtonian mechanics. Lagrange also developed the method of Lagrange multipliers, which allows constraints on systems of equations to be introduced easily in a variational approach.
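In modern notation, the two results mentioned here read as follows (a standard textbook statement, not quoted from Lagrange's own writing):

```latex
% Euler-Lagrange equations of motion for generalized coordinates q_i,
% with Lagrangian L = T - V (kinetic minus potential energy):
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot{q}_i}
  - \frac{\partial L}{\partial q_i} = 0

% Lagrange multipliers: stationary points of f(x) subject to the
% constraint g(x) = 0 satisfy, for some scalar multiplier lambda,
\nabla f(x) = \lambda\,\nabla g(x)
```

The first equation reproduces Newton's laws for conservative systems while requiring only the scalar energies, which is what makes the Lagrangian approach so convenient for complex geometries and constrained motion.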
The mathematical formulations of Euler and Lagrange are fundamental to the finite element method, which is used to solve equations in COMSOL Multiphysics.
In the Eulerian method, the dynamics of a system are considered from the viewpoint of an observer measuring the system’s evolution with respect to a fixed system of coordinates. This coordinate system is called the spatial frame in COMSOL Multiphysics. It could be understood to correspond to the laboratory frame in physical analysis, since the system of coordinates is oriented according to a fixed set of axes without any reference to the orientation of the components of the physical system itself.
The figure below illustrates a thin plate of material whose structural mechanics are modeled in a 2D plane. The plate is fixed to a rigid wall at the left-hand side and is deformed under its own weight, as gravity acts downward. With the results plotted in the spatial frame, we see the deformation of the object, as we would expect to observe in the laboratory.
A thin plate fixed to the gray block at the left deforms under its own weight, as viewed in the spatial (lab) frame. The deflection at the tip is about 5 mm for the given mechanical properties.
Formulating physical equations seems very natural in the Eulerian method. Indeed, this is the common formulation for problems such as electromagnetics and fluid physics, in which the field variables are expressed as functions of the fixed coordinates in the spatial frame.
For mechanical problems, though, the Lagrangian method offers a helpful alternative. In the Lagrangian method, the mechanical equations are written with reference to small individual volumes of the material, which will move within an object as it displaces or deforms dynamically. To put it another way, the object itself always appears undeformed from the point of view of the Lagrangian coordinate system, since the latter stays attached to the deforming object and moves with it, but external forces in the surroundings appear to change their orientation from the deforming object’s perspective. The corresponding coordinate system, which moves along with the deforming object, is called the material frame in COMSOL Multiphysics.
A point within the object, as measured in the spatial frame, is displaced from the position of the same point as expressed in the material frame by the mechanical displacement of that point. In the image below, we focus our view on the tip of the deforming plate in the example above and animate its deformation as the density of the object increases so that the weight increases too. As you can see, the material frame coordinate system (red grid and arrows) deforms together with the object, as the object’s dimensions in the spatial frame change. This means that anisotropic material properties — such as mechanical properties of composite materials — can be expressed conveniently in the material frame.
Zoomed-in view of the tip of a thin plate deforming under its own weight, as its density is increased. The red grid denotes the material frame coordinates, tied to the object, as viewed in the spatial (lab) frame. The red and green arrows show the x- and y-coordinate orientations of the material frame, as viewed in the spatial frame.
In the limit of very small strains for this type of mechanical problem, the spatial and material frames are nearly coincident, because the mechanical displacement is small compared to the object’s size. In this case, it is common to use the “engineering strain” to define the elastic stress-strain relation for the object, and the resulting stress-strain equations are linear. As the mechanical displacement increases, though, the linear approximation used to evaluate the engineering strain becomes increasingly inaccurate, and the exact Green-Lagrange strain is required. In COMSOL Multiphysics, the term “geometric nonlinearity” means that the Green-Lagrange strain is used.
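In index notation (standard definitions, with X the material coordinates and u the displacement field), the two strain measures are:

```latex
% Green-Lagrange strain (exact), including the quadratic term:
E_{ij} = \tfrac{1}{2}\left(
    \frac{\partial u_i}{\partial X_j} + \frac{\partial u_j}{\partial X_i}
  + \frac{\partial u_k}{\partial X_i}\frac{\partial u_k}{\partial X_j}\right)

% Engineering strain: the same expression with the quadratic term dropped,
% valid only when the displacement gradients are small:
\varepsilon_{ij} = \tfrac{1}{2}\left(
    \frac{\partial u_i}{\partial X_j} + \frac{\partial u_j}{\partial X_i}\right)
```

The quadratic term is exactly what "geometric nonlinearity" retains; for small displacement gradients it is negligible, and the two measures coincide.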
For further details on the mathematics, see my colleague Henrik Sönnerlind’s blog post on geometric nonlinearity.
Geometric nonlinearity is handled in COMSOL Multiphysics by allowing the spatial frame to be separated from the material frame, according to a frame transformation due to the computed mechanical displacement. It remains convenient to access the material frame to express properties such as anisotropic mechanical material properties, since these properties will usually remain aligned with the material frame coordinates, even as the object deforms.
By contrast, external forces such as gravity have a fixed orientation in the spatial frame. From the perspective of the material frame, external forces like gravity change direction as the object deforms. The image below shows the tip of the thin plate as above, but here, the displacement magnitude is plotted with colors. Arrows are used to illustrate the force due to gravity, as expressed in the material frame coordinates. Since the material frame coordinates remain fixed with respect to the object, the dimensions of the object appear not to change. However, the displacement magnitude increases with the object’s weight, and the gravity force changes direction with respect to the deformed material as the deformation grows.
Zoomed-in view of the tip of a thin plate deforming under its own weight as its density increases. The plot is in the material frame as used for the Lagrangian formulation, so the deformation is not apparent, although displacement increases. The red arrows indicate the apparent direction of gravity (which is constant in the spatial frame) as perceived from the material frame of reference within the deforming object.
Neither the Lagrangian nor Eulerian formulation is more “physical” or “correct” than the other. They are simply different mathematical approaches to describing the same phenomena and equations. Through coordinate transformation, we can always transform the physical equations for any phenomenon from the material frame to the spatial frame or vice versa. From the perspective of interpretation and implementation, though, each approach has certain advantages and common applications. Some of these are summarized in the table below:
Eulerian method. Strengths: the equations are formulated in a fixed spatial (laboratory) frame, which is natural for field problems. Common applications: fluid flow and electromagnetics.

Lagrangian method. Strengths: the coordinate system follows the material, so deforming objects and anisotropic material properties are described naturally. Common applications: structural mechanics.
What about multiphysics problems, such as fluid-structure interaction (FSI) or geometrically nonlinear electromechanics? In these cases, one physical equation might be formulated most naturally with the Eulerian method, while another might be better expressed with the Lagrangian method. This is where the ALE method comes in. This method solves the equations on a third coordinate system, which is not required to match either the spatial frame or the material frame coordinate systems.
The third coordinate system is called the mesh frame in COMSOL Multiphysics. There is one mathematical mapping between the spatial frame and the underlying mesh frame, and one between the material frame and the underlying mesh frame, so at all points in time, the equations formulated in the spatial and material frames can be transformed into the mesh frame to be solved.
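Schematically (the symbols here are an assumed shorthand, not COMSOL's internal notation), with ξ denoting the mesh-frame coordinates, the two mappings are:

```latex
% Spatial (x) and material (X) coordinates as mappings from the mesh frame:
x = x(\xi, t), \qquad X = X(\xi, t)
% Any field posed in the spatial or material frame can be pulled back to
% the mesh frame through these two mappings and solved there.
```

Because both mappings can evolve in time, neither the spatial nor the material frame needs to coincide with the mesh on which the discrete problem is assembled.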
In domains representing solids in a model, mechanical displacement is predicted using structural mechanics equations in the Lagrangian formulation. Here, the relation of the spatial and material frames is given by the mechanical displacement, as above. The ALE method adds further equations that allow the apparent positions and shapes of mesh elements in neighboring domains to displace in the spatial frame, in order to account for how mechanical deformation can change the shape of the boundaries of any domain where the physics is described in the Eulerian formulation. These additional equations are called a Moving Mesh or Deformed Geometry in COMSOL Multiphysics.
At boundaries between Lagrangian and Eulerian domains, a boundary condition for these additional equations requires that the spatial-frame displacement of the Eulerian domain (as defined through the moving mesh) match the mechanical displacement of the spatial frame relative to the material frame in the Lagrangian domain. Even where no mechanical equations are solved, so that no Lagrangian method is used, the ALE method can still express moving boundaries due to deposition or loss of material.
If you find the ALE method quite mathematical, that’s OK! It’s a difficult concept to follow in the abstract. To better understand the way the ALE method works, let’s take a look at an example within COMSOL Multiphysics.
The ALE method plays an important role in modeling FSI. In COMSOL Multiphysics, this method enables the automated bidirectional coupling of fluid flow and structural deformation, a capability demonstrated in our Micropump Mechanism tutorial model.
At the heart of this micropump mechanism are two cantilevers, which perform the same function as valves in conventional pumping devices. These cantilevers are flexible enough that the fluid flow causes them to deform. As fluid is alternately pumped into or out of the channel at the top, the force of the fluid flow causes the two cantilevers to deform so that fluid flows out to the right or in from the left.
The micropump mechanism. Pumping fluid into or out of the top tube produces opposite reactions in the two cantilevers, pushing fluid in or out of the chamber. Even though there is no timeaveraged net flow into the upper tube, there is a timeaveraged net movement of fluid from left to right.
The cantilevers deform enough that there is an appreciable change in the position of the boundary where the fluid and solid meet: a geometrically nonlinear case. The self-consistent handling of the fluid’s pressure on the solid and the solid’s force on the fluid, together with the deformation of the mesh, is managed automatically by the Fluid-Structure Interaction interface. The interface employs the ALE method to account for the change in shape in the solid and fluid regions.
For solids, the mechanical equations with geometric nonlinearity define the displacement of the spatial frame with respect to the material frame. In the fluid equations, it’s necessary to deform the mesh on which the equations are solved in order to express the displacement of the solid boundaries in the spatial frame where the fluid equations are formulated. The deformation at the boundaries is controlled by the mechanical displacement from the solution to the structural problem. Within the fluid, though, the exact position or orientation of mesh nodes isn’t important, as the equations are formulated in the fixed spatial frame. Instead, the deformation of the mesh is smoothed in order to ensure that the numerical problem remains stable with highquality mesh elements.
To explain the ALE method for the FSI problem, we could paraphrase a common explanation for general relativity: forces due to fluid flow (Eulerian) tell the structure how to deform in the material frame (Lagrangian), while the structural deformation (Lagrangian) tells the mesh how to move in the spatial frame (Eulerian).
Top: The micropump’s operation, including pressure, flow, and cantilever deformation, as plotted in the spatial frame. Bottom: Mesh deformations calculated by the ALE method.
As of COMSOL Multiphysics version 5.3a, the Moving Mesh feature to define mesh deformation in this type of problem is located under Component > Definitions. This allows consistency in the definition of material and spatial frames between all physics included in a model, even if several physics interfaces are included. The screen capture below shows where these settings are located in the COMSOL Multiphysics Model Builder tree.
Screen capture showing Moving Mesh features under Component > Definitions, and physical coupling between two physics interfaces through Multiphysics > Fluid-Structure Interaction.
Turning to an electrochemical problem, the Copper Deposition in a Trench tutorial model shows that the ALE method can be vital for simulating electrodeposition problems. In this model, copper is deposited onto a circuit board that has a small “trench”. The deposited copper layer becomes thick compared to the overall size of the trench, so the size and orientation of the copper surface change appreciably as deposition proceeds. Since the rate of copper deposition at different points on this surface is nonuniform, the shape and movement of the boundary cannot be neglected.
A schematic of the physical problem being solved in the electrodeposition model.
To calculate the rate of deposition at a given point on the copper electrodeelectrolyte interface, we need the concentration of the species and the electrolyte potential of the solution adjacent to that point. As the deposition progresses and the boundary moves, the shape of the electrolyte volume has to change continuously. Similarly, the concentration and potential distributions on the altered shape must be recalculated.
The coupling of the deposition rate to the boundary motion rate and the calculation of the changing shape are accomplished with the ALE method and fully automated multiphysics couplings with the Tertiary Current Distribution and Deformed Geometry interfaces. Here, the Deformed Geometry displaces the copper surface in the spatial frame at a rate proportional to the local current density for electrodeposition, as computed from the electrochemical interface.
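The boundary velocity used by such a coupling follows from Faraday's law of electrolysis. As a sketch (the function name and the current density value are illustrative; the material data are nominal values for copper, not taken from the tutorial model):

```python
# Deposition-front velocity from Faraday's law: v = i * M / (n * F * rho),
# where i is the local current density, M the molar mass, n the number of
# electrons per ion, F the Faraday constant, and rho the deposit density.

F = 96485.0        # Faraday constant, C/mol
M_CU = 63.55e-3    # molar mass of copper, kg/mol
RHO_CU = 8960.0    # density of copper, kg/m^3
N_ELECTRONS = 2    # Cu(2+) + 2e- -> Cu

def deposition_velocity(i_loc, molar_mass=M_CU, n=N_ELECTRONS, rho=RHO_CU):
    """Normal growth velocity (m/s) of the copper surface for a local
    current density i_loc (A/m^2)."""
    return i_loc * molar_mass / (n * F * rho)

v = deposition_velocity(100.0)  # assumed local current density of 100 A/m^2
print(f"{v:.2e} m/s, {v * 3.6e9:.1f} um/h")
```

Because the local current density varies along the trench, so does this velocity, which is exactly why the deposited layer grows unevenly and the boundary motion cannot be neglected.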
With this model, we can accurately account for the deposition process in order to optimize its parameters. We can also experiment with different applied potentials and deposition surface geometries to improve the uniformity of the deposition, which produces a more efficient process and a higherquality end product.
Animations showing the evolution of the deposition process in time. It is clear that the deposition happens unevenly, resulting in a pinching of the trench opening at its top.
Thermal ablation, discussed in this previous blog post, involves a very high temperature applied to an object, causing the surface to melt and vaporize. Examples of thermal ablation include the removal of material by lasers — such as in the etching process, laser drilling, or laser eye surgery — and a spacecraft’s heat shield as it reenters the atmosphere.
Animation showing the effect of thermal ablation on a material.
Since we expect that an object’s shape will change when some of its material is removed, deforming meshes are clearly a key part of thermal ablation simulation. What we need to know is how the shape of the object will change. This depends on how we balance the applied heat with heat lost to ablation and heat dissipation throughout the structure by mechanisms such as conduction.
To obtain this information, we can predict the temperature profile as a function of space and time by solving the heat transfer equations using the Heat Transfer interface. Because the mass and shape of the object are changing, the Heat Transfer interface is coupled to a Deformed Geometry interface, using the ALE method to displace the boundary according to the rate of ablation. The Heat Transfer equations predict the temperature distribution in the object as its shape evolves.
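As a rough, zero-dimensional illustration of that energy balance (all names and numbers below are assumptions for illustration; the tutorial itself solves the full coupled heat transfer problem):

```python
# Steady ablation-front velocity from a surface energy balance: the net
# absorbed heat flux q_net heats the material to its ablation temperature
# and then removes it, so v = q_net / (rho * (cp * dT + H)).
# All numbers are illustrative assumptions, not model data.

def ablation_velocity(q_net, rho, cp, dT, H):
    """Front recession speed (m/s) for a net absorbed flux q_net (W/m^2)."""
    return q_net / (rho * (cp * dT + H))

v = ablation_velocity(q_net=1.0e7,  # W/m^2, assumed absorbed laser flux
                      rho=2700.0,   # kg/m^3, assumed density
                      cp=900.0,     # J/(kg K), assumed heat capacity
                      dT=600.0,     # K, assumed rise to ablation temperature
                      H=1.0e7)      # J/kg, assumed effective ablation enthalpy
print(f"{v * 1e3:.3f} mm/s")
```

In the full model, conduction carries part of the heat away from the surface, so the actual front velocity is lower and varies in space and time; that is what the coupled Heat Transfer and Deformed Geometry interfaces resolve.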
By performing these steps, we can attain accurate calculations for the thermal ablation process. Moreover, we can determine the final shape of the object after ablation is complete. This might enable us to check whether a laser weld will fall within acceptable tolerances or whether a spacecraft will survive an emergency landing.
The contributions of Leonhard Euler and JosephLouis Lagrange in the field of mathematics have paved the way for simulating a variety of systems involving multiphysics applications. The combination of their individual methods has led to the development of the ALE method, which can be used to predict physical behavior when objects deform or displace. By properly accounting for these movements, you can set up highly accurate models. Remember to thank Euler and Lagrange as you investigate these and other models that exploit the ALE method!
The ALE method is one of many builtin physics capabilities in the COMSOL Multiphysics® software. See more of them:
As children, many of us took eye examinations to determine if we had a color vision deficiency (CVD). These tests can involve showing the subjects pseudoisochromatic plates, which are circles made up of dots of different colors and sizes. The ability to differentiate between the dots and see a number in the middle of the plate indicates that a person does not have CVD.
An illustrative example of a color vision test. The numbers included are 5 and 3.
However, for the 1 in 12 men and 1 in 200 women with CVD, these symbols are hidden in a colorful camouflage. This is because CVD makes it difficult to distinguish between particular colors.
In the scientific community, this poses a problem because color tables are often used to help visualize results and present data in a way that is meant to be easily understood. To do so, color tables use arrays of colors in a predefined order, with each color representing a different value. Some color tables, such as rainbow (the default color table in many software packages), use a wide range of colors. For engineers with CVD, the colors used in these tables can cause data misinterpretations, possibly obscuring key results and findings.
For example, take a look at the images below. These show that results visualized with a rainbow color table (left) can appear completely different to a person with CVD (middle). In this case, which simulates red-green CVD, or deuteranopia, the bright red from the rainbow color table would be interpreted as a darker gray-yellow by engineers with CVD, which could result in data misinterpretation. One solution to this problem would be to generate a new color table so that the results can be interpreted correctly by engineers with CVD (right).
Different visualizations of a mixer model.
Rainbow color tables are not only problematic for people with CVD; they pose issues for people without CVD, too. Jamie Nuñez, one member of a research team from the Pacific Northwest National Laboratory (PNNL), explained that rainbow color tables introduce artifacts due to their uneven change between colors and their lack of a ramp in lightness (i.e., lightness steadily increasing from one end of the color table to the other). This can cause the appearance of significant (or insignificant) regions when the opposite is true.
Nuñez also noted that although we can compare different regions by including a color table next to an image, unnecessarily complex color tables just slow down interpretation and can lead to incorrect conclusions.
In addition, despite the prevalence of tests for CVD, it is possible to have a CVD and be unaware of it. This happens because people learn what colors certain objects are supposed to be from an early age. And regardless of whether someone actually sees the same color as another person, they will call it the same thing. This means that whether we know it or not, we may be incorrectly perceiving the colors used in the results of simulation and engineering projects.
Due to these issues, Nuñez and her fellow PNNL researchers Dr. Ryan Renslow and Dr. Christopher Anderton came to a realization: There has long been a need to move away from rainbow color tables. Therefore, the team decided to make an optimized color table that could be used throughout the scientific community.
To create an optimized color table for engineers with CVD, one option is to use grayscale. However, presenting results as grayscale images comes with its own set of issues. Namely, people have more difficulty distinguishing between different shades of gray and are less able to observe subtle changes when these tables are used.
Instead, the PNNL team created Cividis, a color table that is optimized with CVDs in mind.
The Cividis color table helps people with CVD accurately interpret simulation results, such as the sound pressure level of this loudspeaker.
For the PNNL team, the underlying goal of Cividis was to create a color table that optimizes the viewing of scalar data for people with and without CVD. In essence, the color table should represent data as accurately as possible for the maximum number of people.
Achieving these goals wasn’t easy, as the team had to develop code to optimize color tables, which Nuñez mentioned was their greatest challenge. While the team knew its goals and the steps involved, actually understanding how to accomplish them without having to manually tweak anything was challenging. In addition, gathering and interpreting relevant color theory information took quite a bit of work.
In the end, the team overcame these challenges and created Cividis by optimizing the Viridis color table, which is seen as the current gold standard of color tables but is not optimized for those with CVD. Cividis includes various shades of blue and yellow to create a user-friendly color table for people with and without CVD. What’s more, the PNNL team decided to share Cividis with COMSOL so that users of the COMSOL Multiphysics® software can easily access it for their own simulations. This color table is available as of version 5.3a of COMSOL Multiphysics.
The temperature profile in a heat sink (right color legend) and in the air around the heat sink (left color legend). This model was visualized using a rainbow color table.
This image depicts how people with deuteranopia perceive the rainbow color table results.
In this version of the model, the results are visualized using the Cividis color table. By swapping out the rainbow color table (top) for Cividis, engineers with CVD can more easily analyze the temperature field and avoid potential data interpretation artifacts.
According to the PNNL team, Cividis offers three primary advantages to users of COMSOL Multiphysics.
First of all, it provides a perceptually uniform change in color and a constant ramp in lightness. This means that the colors in Cividis change smoothly over the color table, with brighter colors representing higher values and vice versa. This is beneficial, Nuñez explains, because Cividis makes it intuitive how the different colors within the color table compare to each other. That makes it easy for others to understand how different values in an image compare and allows truly significant regions to stand out.
Additionally, the wide range of colors used in Cividis prevents the issues that come with using a grayscale. Finally, although Cividis has been specifically tested for use with red-green color deficiencies, the most common CVD, it can be used by engineers with and without CVD. This is because Cividis looks the same to engineers with normal color vision, a deuteranomaly, or deuteranopia.
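The lightness-ramp property is easy to check numerically. The sketch below uses a simple two-color blue-to-yellow ramp as a stand-in for Cividis; the endpoint colors and the luma formula are illustrative assumptions, not the published Cividis data:

```python
# Check that a color table has a monotonic lightness ramp, using the
# Rec. 709 luma as a cheap stand-in for perceptual lightness.
# The dark-blue -> yellow endpoints loosely mimic Cividis and are assumed.

DARK_BLUE = (0, 32, 77)    # RGB, low end of the ramp
YELLOW = (255, 234, 70)    # RGB, high end of the ramp

def ramp(t, lo=DARK_BLUE, hi=YELLOW):
    """Linear interpolation between the two endpoint colors, t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(lo, hi))

def luma(rgb):
    """Rec. 709 luma approximation of lightness."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

lightness = [luma(ramp(i / 20)) for i in range(21)]
is_monotonic = all(x < y for x, y in zip(lightness, lightness[1:]))
print(is_monotonic)  # True: lightness rises steadily along the ramp
```

A rainbow table fails this check, since its lightness rises and falls several times between blue, green, yellow, and red; that non-monotonicity is exactly the source of the artifacts Nuñez describes.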
Using Cividis to model a Kármán vortex street behind a sphere subjected to a flow.
While we at COMSOL find the Cividis color scheme to be aesthetically pleasing (perhaps reminiscent of the colors of a moonlit sky), some testers found the color table to be unattractive due to a lack of color changes. To tackle this issue, the PNNL team plans to use the tools they created to optimize Cividis and create another optimized color table that cycles through more colors while remaining optimal for varying severities of deuteranomaly…so, stay tuned!
Moving forward, the team feels that in order for the scientific community as a whole to shift toward the use of optimized color tables such as Cividis, it needs to be easy to understand their importance and add them to software. This is why the PNNL team shared Cividis with COMSOL. They also plan on making all of their materials free and widely available. Nuñez says that their goal was to make this color table — along with the code used to generate it and the paper written discussing its design — available to everyone to help address the problem the team had identified.
Nuñez, Renslow, and Anderton hope that their work will increase the awareness and availability of CVD-friendly color tables, helping around 600 million people worldwide with CVD.
With equationbased modeling, part of the core functionality of COMSOL Multiphysics, you can create your own model definitions based on mathematical equations and directly input them into the software’s graphical user interface (GUI).
These abilities give you complete control over your model, so you can tailor it to your exact specifications and add complexity as needed. To provide this flexibility, COMSOL Multiphysics uses a built-in interpreter that processes equations, expressions, and other mathematical descriptions before producing a model. In addition, you can use tools like the Physics Builder to create your own physics interfaces, or the Application Builder to create entirely new user interfaces.
Example of entering a custom partial differential equation into the COMSOL Multiphysics GUI.
Using this functionality, you can work with PDEs, ODEs, DAEs, and other custom mathematical expressions.
There is no limit to how creative you can be when setting up and solving your models with equation-based modeling, which expands what you can achieve with simulation. To show this functionality in action, let’s take a look at three examples…
In 1895, the Korteweg–de Vries (KdV) equation was formulated as a means to model water waves. Since the equation doesn’t introduce dissipation, the waves travel seemingly forever. These waves are now called solitons, which are seen as single “humps” that can travel over long distances without altering their shape or speed.
Today, engineers use the KdV equation to understand light waves. As a result, one of the main modern applications of solitons is in optical fibers.
To solve the KdV equation in COMSOL Multiphysics, users can add PDEs and ODEs into the software interface via mathematical expressions and coefficient matching. It’s also possible to easily define dependent variables and identify coefficients via the General Form PDE interface.
With this setup, users are able to model an initial pulse in an optical fiber and the resulting waves or solitons. According to the KdV equation, the speed of the pulse should determine both its amplitude and width, which can be observed via simulation. In addition, the simulation reveals that, just like with linear waves, solitons can collide and reappear while maintaining their shape. This counterintuitive finding would be challenging to observe without simulation.
If you want to learn more about this example, see the KdV equation model in the Application Gallery.
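As a quick, COMSOL-independent illustration of the relationship between a soliton's speed, amplitude, and width mentioned above, the single-soliton solution of the KdV equation (in the common normalization u_t + 6·u·u_x + u_xxx = 0) can be checked numerically with a few lines of NumPy. The normalization and the finite-difference check are our own choices, not taken from the model:

```python
import numpy as np

# Single-soliton solution of the KdV equation u_t + 6*u*u_x + u_xxx = 0.
# Its amplitude (c/2) and width (~1/sqrt(c)) are both set by the speed c.
def soliton(x, t, c):
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

# Check the PDE residual with centered finite differences.
c, t, h = 4.0, 0.0, 1e-3
x = np.linspace(-10.0, 10.0, 2001)
u = soliton(x, t, c)
u_t = (soliton(x, t + h, c) - soliton(x, t - h, c)) / (2 * h)
u_x = np.gradient(u, x)
u_xxx = np.gradient(np.gradient(u_x, x), x)
residual = u_t + 6 * u * u_x + u_xxx
print(np.max(np.abs(residual)))  # small: finite-difference error only
```

The residual is nonzero only because of the discretization error of the nested difference stencils, confirming that the sech² pulse is an exact traveling-wave solution.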
Simulation showing how solitons maintain an intact shape when colliding and reappearing.
Moving on to a medical example, let’s see how simulation can be used to understand the rhythmic patterns of contractions and dilations in a heart. The rhythmic contractions are triggered when the heart passes an ionic current through the muscle. During this process, ions flow through small pores that exist in an excitation (open) or rest (closed) state within the cellular membrane. As such, to gain a better understanding of heart patterns, the electrical activity in cardiac tissue needs to be examined.
Studying the electrical signals in a heart is not a simple process and involves modeling excitable media. To address this challenge, users can implement two sets of equations to describe various aspects of the electrical signal propagation. One such example is the Electrical Signals in a Heart model, provided through the courtesy of Dr. Christian Cherubini and Prof. Simonetta Filippi from the Campus Bio-Medico University of Rome in Italy. The equations used in this model, FitzHugh–Nagumo and complex Ginzburg–Landau, are included in the PDE interfaces available in COMSOL Multiphysics.
By using the FitzHugh–Nagumo equations to simulate excitable media, it is possible to create a simple physiological heart model with two variables: an activator (corresponding to the electric potential) and an inhibitor (the voltage-dependent probability that the membrane’s pores are open and can transmit ionic current). Using these equations and various parameters, users can visualize a reentrant wave that moves around the tissue without damping, which results in a characteristic spiral pattern. In the context of electrical signals, this pattern could generate effects similar to those of arrhythmia, a condition that disturbs the normal pulse of a heart.
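The activator–inhibitor mechanism can be illustrated outside COMSOL by integrating the FitzHugh–Nagumo equations in zero dimensions, i.e., without the spatial diffusion term that produces the spiral waves. This is a hedged sketch: the parameter values below are common textbook choices, not those of the heart model:

```python
import numpy as np

def simulate_fhn(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    # v: activator (electric potential), w: slow inhibitor (recovery).
    # Parameter values are textbook defaults, not the heart model's.
    v, w = -1.0, 1.0
    vs = np.empty(steps)
    for i in range(steps):
        dv = v - v ** 3 / 3 - w + I    # fast activator kinetics
        dw = eps * (v + a - b * w)     # slow inhibitor kinetics
        v, w = v + dt * dv, w + dt * dw
        vs[i] = v
    return vs

vs = simulate_fhn()
# The activator repeatedly fires and recovers (relaxation oscillations).
print(vs.min(), vs.max())
```

The repeated firing and recovery seen in the printed range is the local building block of the reentrant waves described above.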
Solving the FitzHugh–Nagumo equations at times of 120 (left) and 500 (right) seconds.
The complex Ginzburg–Landau equations help to model some parts of the transition from periodic oscillatory behavior to a chaotic state. During this transition, the amplitude of oscillations gradually increases and the periodicity decreases. These equations are used to study the dynamics of spiral waves in excitable media. The results show the diffusing species and the characteristic spiral patterns, which increase in complexity over time.
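Similarly, the spatially uniform part of the complex Ginzburg–Landau equation already exhibits the oscillatory behavior that, with diffusion added, organizes into spiral waves. A minimal sketch, with an arbitrary choice of the parameter c:

```python
import numpy as np

def cgl_local(A0=0.1 + 0.0j, c=1.0, dt=0.01, steps=2000):
    # dA/dt = A - (1 + i*c)*|A|^2 * A: the local (non-spatial) part of
    # the complex Ginzburg-Landau equation; c is chosen arbitrarily.
    A = A0
    for _ in range(steps):
        A = A + dt * (A - (1 + 1j * c) * abs(A) ** 2 * A)
    return A

A = cgl_local()
print(abs(A))  # the amplitude relaxes toward 1 while the phase rotates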
Solving the complex Ginzburg–Landau equations at times of 45 (left) and 75 (right) seconds.
Using both sets of equations enables the visualization of complicated real-world phenomena.
Lastly, let’s take a look at the Lorenz equations, which were developed to serve as a simple mathematical model for atmospheric convection. When using certain parameter values and initial conditions, a system of ODEs (a Lorenz system) has chaotic solutions. One such solution is a Lorenz attractor, which looks like a figure eight or butterfly when plotted in the phase space.
Example of the typical shape of a Lorenz attractor.
To solve the Lorenz attractor model, the Lorenz equations — a system of three coupled ODEs that contain three degrees of freedom — need to be added into the software. This is a straightforward process when using the Global ODEs and DAEs interface to define the Lorenz system.
Next, users can view an initial solution close to the attractor and study the growth of a very small perturbation to this initial data. The results (seen in the left image below) visualize how the difference between the original and perturbed problems increases over time. In addition, the simulation demonstrates that with the chosen parameter values, the Lorenz system behaves like a Lorenz attractor, with results showing the butterfly shape that these attractors are known for.
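The perturbation-growth experiment described above can be sketched outside COMSOL with a fixed-step Runge–Kutta integrator. The classic parameter values sigma = 10, rho = 28, beta = 8/3 are assumed here, not taken from the model documentation:

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, state, dt, n):
    # Fixed-step fourth-order Runge-Kutta time integration.
    out = [state]
    for _ in range(n):
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out.append(state)
    return np.array(out)

dt, n = 0.005, 4000  # integrate to t = 20
a = rk4(lorenz_rhs, np.array([1.0, 1.0, 1.0]), dt, n)
b = rk4(lorenz_rhs, np.array([1.0, 1.0, 1.0 + 1e-8]), dt, n)  # perturbed
separation = np.linalg.norm(a - b, axis=1)
# The tiny initial perturbation grows by many orders of magnitude,
# while both trajectories stay on the bounded butterfly-shaped attractor.
print(separation[0], separation[-1])
```

The exponential growth of the separation, together with the boundedness of both trajectories, is exactly the chaotic behavior the Lorenz attractor is known for.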
The differences between the unperturbed and perturbed solutions over time (left) and the normal pattern for a Lorenz attractor (right).
Watch a quick video introduction to COMSOL Multiphysics to learn more about the software’s key features. When you’re ready, request a software demonstration.
In the previous blog post, we discussed how to loft a geometry from imported curve data, using the human head as an example of an irregular shape. Today, let’s look at another irregular shape: the Matterhorn. This mountain in the Alps, on the border between Switzerland and Italy, has a summit elevation of 4478 meters (14,692 feet).
The east and north faces of the Matterhorn. Photo from camptocamp.org. Licensed under CC BY-SA 3.0, via Wikimedia Commons.
Geographical data is typically available as height (elevation) data. Today, we’ll discuss how to import elevation data to model the irregular shape of the Matterhorn’s surface. In short, the procedure consists of importing the elevation data into a function, creating a parametric surface from that function, and converting the result into a solid geometry.
Now, let’s take a look at how to create a solid geometry of the Matterhorn in COMSOL Multiphysics.
We will use both a text file and grayscale image of the mountain’s height to create a model geometry resembling the Matterhorn. The text file is imported in an Interpolation function, while the picture is imported in an Image function. We’ll also briefly cover importing a DEM file into an Elevation function, but this is not included in the example MPH-file that can be downloaded at the end of this blog post.
In the Image function, we specify the actual maximum and minimum values in the x and y directions, as the picture only contains information on the number of pixels and the color of each pixel. As the dimensions of the geometry are 2000 meters in size, the minimum and maximum values of x and y are set to -1000 m and 1000 m, respectively. Note that if the functions are used in the definitions of the materials or physics, it is also possible to add the units of the arguments and the function.
The Settings windows of the Interpolation function (left) and the Image function (right). The size and position of the region are defined by the text file used in the Interpolation function, while the actual size of the region must be set for the Image function.
A plot showing the Interpolation function of the imported text file (left). Imported data: DHM25 © swisstopo. The color bar values represent the actual height of the mountain. The grayscale image (right) shows the height of the mountain. Note that the color bar is normalized to go from 0 to 1.
If the geographical data is in a DEM file, it is more suitable to create an Elevation (DEM) function. If the region specified in the DEM file isn’t rectangular, we can specify a height to use outside this region in the Replace missing data with edit field. In the example below, the height of the surface is set to 0 m.
An Elevation (DEM) function is used whenever a DEM file is imported. Enter a value in the Replace missing data with edit field if the region defined in the file doesn’t fill up a rectangular area. The default value is set to 0 m.
As the underlying data is now available in the model, let’s move on to creating the actual shape of the mountaintop. We use a Parametric Surface feature for this purpose.
The Parametric Surface feature is found under More Primitives in the Geometry ribbon.
The procedure is quite easy when a DEM file is imported, as we can just click the Create Surface button. This sets up a Parametric Surface feature with the maximum and minimum values in the parameter directions from the DEM file already filled in.
To create a parametric surface based on an imported DEM file, click the Create Surface button.
As the functions are slightly different, the expressions used will also differ. It is recommended to let the two parameters (s1 and s2) go from 0 to 1, so to get the actual dimensions in the final geometry, we need to reparameterize the x-, y-, and z-expressions.
For the Interpolation function, which is defined using the real dimensions of the Matterhorn, the expressions will look like those shown below. One way of obtaining the maximum and minimum values in the x and y direction is to first build the Parametric Surface without rescaling the expressions and then measure the x and y positions of the corners of the created surface. An alternative is to import the coordinate data into a spreadsheet editor, where it is possible to rearrange the coordinates in increasing order.
x: s1*(6.18e5-6.16e5) m
y: s2*(9.27e4-9.07e4) m
z: int1(s1*(6.18e5-6.16e5)+6.16e5, s2*(9.27e4-9.07e4)+9.07e4) m
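As a third way to obtain those minimum and maximum coordinate values, beyond measuring corner positions or sorting in a spreadsheet, the coordinate columns of the exported text file can be read directly. The file name and the three-row sample below are placeholders, not the actual swisstopo data set:

```python
import numpy as np

# "matterhorn.txt" is a placeholder name; the rows below are a tiny
# stand-in sample with the same column layout (x, y, z in meters).
# data = np.loadtxt("matterhorn.txt")
data = np.array([[6.16e5, 9.07e4, 3100.0],
                 [6.18e5, 9.27e4, 2980.0],
                 [6.17e5, 9.17e4, 4478.0]])
x_min, x_max = data[:, 0].min(), data[:, 0].max()
y_min, y_max = data[:, 1].min(), data[:, 1].max()
print(x_min, x_max, y_min, y_max)
```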
The expressions in the Image function, in which x- and y-values go from -1000 m to 1000 m and output values go from 0 to 1, will instead look like this:
x: s1*2000 m
y: s2*2000 m
z: (4478-3000)*im1(s1*2000-1000, s2*2000-1000) m
Note that we also need to scale the values in the z direction when using an Image function, as its output is normalized to go from 0 to 1. In the Settings windows shown below, you can see that the z position is set to 3000 m to translate the surface to the correct position in space.
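The Image function reparameterization can be summarized as a plain mapping from the normalized parameters to physical coordinates. In this sketch, im1 is only a stand-in for COMSOL's Image function lookup; the Gaussian peak used here is purely illustrative:

```python
import numpy as np

def im1(x, y):
    # Stand-in for the Image function: a normalized gray value (0..1)
    # at coordinates (x, y) in meters. Illustrative Gaussian peak only.
    return np.exp(-(x ** 2 + y ** 2) / (2 * 500.0 ** 2))

def surface_point(s1, s2):
    # (s1, s2) in [0, 1]^2 -> (x, y, z). The gray value is rescaled by
    # the height span (4478 m summit minus the 3000 m base level); the
    # separate z position setting of 3000 m then translates the surface.
    x = s1 * 2000.0
    y = s2 * 2000.0
    z = (4478.0 - 3000.0) * im1(s1 * 2000.0 - 1000.0, s2 * 2000.0 - 1000.0)
    return x, y, z

x, y, z = surface_point(0.5, 0.5)  # center of the image
print(x, y, z)
```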
To get a better representation of the surface, the Maximum number of knots is increased to 300 (the default value is 20). This means that the rectangular area will be divided into, at maximum, 300 pieces in both parameter directions, creating patches. The more knots that are allowed, the more flexibility is given to adjust the patches to the given z expression, thereby improving the chances of achieving a tighter relative tolerance.
The algorithm starts by dividing the whole area into a small number of patches and then increases the number of patches where the error is large. By allowing a larger number of knots, the relative error between the patch placements and the actual data points is decreased. The algorithm tries to reach the set Relative tolerance (default value 1.0E-6) by adding more knots.
When it isn’t possible to reach the tolerance, which can happen if the Maximum number of knots is set too low, a warning is issued stating which tolerance was actually used to build the surface. To remove the warning, copy that tolerance from the Warning node, paste it into the Relative tolerance field of the Parametric Surface feature, and build the feature once more.
In the examples used here, the Relative tolerance is manually set to 0.002. If the number of knots is too large, it will result in a heavy geometry operation when creating the surface. There is a balance between using enough knots to get a small relative error and keeping the number of knots low enough so that the operation completes in a reasonable time. Sometimes, a smoother surface is a desired outcome, for instance, if the surface definition contains noise. In that case, reducing the Maximum number of knots will provide a surface that does not follow the noise too closely.
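The refinement idea described above can be sketched in one dimension: repeatedly add knots where the relative error of the current fit is largest, until either the tolerance is met or the knot budget is exhausted. COMSOL's actual fitter works with spline patches in two parameter directions; this piecewise-linear toy version only illustrates the tolerance/knot tradeoff:

```python
import numpy as np

def adaptive_knots(f, tol=0.002, max_knots=300):
    # Start coarse, then repeatedly bisect the interval whose midpoint
    # shows the largest relative error of the piecewise-linear fit.
    knots = np.linspace(0.0, 1.0, 5)
    scale = np.max(np.abs(f(np.linspace(0.0, 1.0, 1001))))
    while True:
        mids = 0.5 * (knots[:-1] + knots[1:])
        err = np.abs(f(mids) - 0.5 * (f(knots[:-1]) + f(knots[1:]))) / scale
        worst = int(np.argmax(err))
        if err[worst] < tol or len(knots) >= max_knots:
            return knots, float(err[worst])
        knots = np.sort(np.append(knots, mids[worst]))

# An oscillatory test function stands in for a noisy height profile.
knots, err = adaptive_knots(lambda s: np.sin(6 * np.pi * s))
print(len(knots), err)
```

Lowering `max_knots` in this sketch has the same qualitative effect as in the Parametric Surface feature: the fit stops following fine-scale wiggles, which is sometimes exactly what is wanted for noisy data.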
Settings windows of the two Parametric Surface features. The expressions have been reparameterized to keep the two parameters normalized. An increased Maximum number of knots is used to get a better representation of the surfaces.
Regardless of which method we have followed, we should now have a geometric surface object that represents the surface of the Matterhorn. However, in most simulations, a solid domain is needed. To create one, we add a Block with a size and location such that the Parametric Surface intersects the block.
The two geometry objects are then added to a Convert to Solid feature. The Convert to Solid operation creates a union of the block and surface, and in addition, it removes any parts of the surface that are sticking out of the block. In this case, where the block perfectly fits the outer edges of the surface, we could also use a Union operation and it will work just as well. Combining the surface and the block results in a solid object that consists of two domains separated by the surface of the Matterhorn.
Resulting geometries after building the Convert to Solid feature. The left image shows the irregular surface based on the interpolated data from the text file, and the right image shows the one based on the grayscale image.
The procedure described in this blog post can be used to create a sandwichtype geometry, where the imported surfaces separate different materials, for example, if you want to take a look at the stresses in the layers of rock with different properties. In this case, follow the same procedure to generate each surface and include them all in the Convert to Solid feature.
We now have a geometry of the mountain that we can use for meshing and simulation. However, if we are only interested in an analysis of the rock, we can easily remove the upper domain that represents the air. A Delete Entities feature can be used to remove the air domain by setting the Geometric entity level to Domain and adding "domain 2" to the selection. Now, if we rotate the mountain, we can see the resemblance to the photo of the Matterhorn shown at the beginning of the post.
The final geometries created using a text file (left) and an image (right) as input. Imported data: DHM25 © swisstopo.
Even though the two mountaintop geometries are very much alike, they still differ from each other. If meshed with equal mesh sizes, they will give slightly different meshes. This is in part due to the fact that the Interpolation and Image functions give slightly different inputs to the Parametric Surface feature.
The Parametric Surface feature itself also makes an interpolation when it adapts the surface to the knots described above, so there are two interpolations involved here. However, as long as the size of the mesh is larger than the error of the two mentioned interpolations, it will be an adequate approximation of the imported data.
Irregular shapes can also come in other types of file formats. In a previous blog post, we discuss how to create geometries out of imported meshes. Next in this series, we will demonstrate how to interpolate material data on a regularshaped domain.
Download the file used to create the example featured here via the button below.
A unit is an established reference magnitude for a physical quantity; any other magnitude of that quantity can then be expressed as a multiple of the unit. The units for different quantities together form a unit system. Over time, many unit systems have fallen out of use, leaving two dominant systems: metric and English.
The International System of Units (SI) is derived from the metric system and is now the global standard. The United States, Myanmar, and Liberia are the only countries that have not fully converted to the SI. Although the majority of countries shifted to SI units for professional use, many traditional units are still in use at the local level.
The other unit system is the Imperial system, or the United States Customary Unit System, which is the modern form of English unit systems. (Although different from the Imperial system, the United States Customary Unit System will be called the Imperial system in this blog post for generality.)
Why is the distinction between unit systems important? Let’s take a look at two historical disasters to find out.
In the 17th century, King Gustav II Adolf of Sweden aimed to make his country one of the most formidable military powers in Europe. He contracted a Dutch shipbuilding company to build four powerful ships. One of them was the Vasa, slated to be the most powerful warship in the Baltic Sea for that era. In 1628, the Vasa set off on its first journey and sank just 1300 meters into its voyage, about 1700 meters from the shipyard.
The Vasa Ship is on display at the Vasa Museum in Stockholm, Sweden.
According to research on the Vasa Ship from as recently as 2012, a main cause of the disaster was an asymmetric ship structure: The ship was heavier on the port side than the starboard side. Investigators found that four rulers used by Vasa workmen had different standards of measurement. Two rulers used the Swedish unit for feet, with 1 foot equal to 12 inches. The other two rulers used the Dutch unit for feet, in which 1 foot equals 11 inches. By failing to convert the units into a standard unit system, the ship builders caused asymmetry in the ship structure, which was one of the main reasons it sank.
Jumping forward in time, the Mars Climate Orbiter is a famous example of a space exploration failure caused by using two different unit systems. NASA launched the Mars Climate Orbiter in December 1998 to study the climate of Mars. However, in September 1999, contact was lost with the orbiter and the mission was declared a failure.
NASA investigated what went wrong and detailed eight contributing factors in the failure report, underlining the main cause as a failed conversion between Imperial and SI units. During the orbital insertion maneuver, the intended height of the orbiter above the Martian surface was 110 kilometers. Instead, it ended up on a trajectory that brought it to a height of 57 kilometers, where the orbiter came into contact with Mars’ atmosphere and disintegrated.
As it turns out, the software used to calculate the impulse necessary for maneuvering used the Imperial unit system, providing the data in pound-seconds. The software used to calculate the orbiter’s trajectory interpreted this data in the SI as newton-seconds. This put the orbiter on the wrong trajectory.
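The size of that mismatch is worth spelling out: an impulse reported in pound(-force)-seconds but read as newton-seconds understates the true impulse by a factor of roughly 4.45. The impulse value below is hypothetical; only the conversion factor is exact:

```python
# The conversion factor for pound-force to newtons is exact by definition.
LBF_IN_NEWTONS = 4.4482216152605

impulse_reported = 100.0                             # lbf*s (hypothetical)
impulse_as_read = impulse_reported                   # misread as N*s
impulse_actual = impulse_reported * LBF_IN_NEWTONS   # true impulse in N*s
error_factor = impulse_actual / impulse_as_read
print(round(error_factor, 2))  # → 4.45
```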
COMSOL Multiphysics supports different unit systems and enables easy and accurate conversion between them. At the model root level (or root node), the COMSOL Multiphysics software enables you to select an appropriate unit system and its different variations:
To demonstrate the capabilities of COMSOL Multiphysics for unit systems, we use a model from the Application Library: the Tapered Cantilever with Two Load Cases.
The tapered cantilever model and different unit systems in COMSOL Multiphysics.
The choice of unit system does not constrain you to use units from the selected system. Instead, the chosen unit system is applied by default to physical quantities where no unit is explicitly written. For example, with SI selected for Unit System, you can still apply a pressure in MPa by writing [MPa] after the value, or prescribe a displacement in inches by writing [in] after the value. This flexibility enables you to work with different unit systems at the same time.
Take a look at the Boundary Load feature in the tutorial model. The left image below shows the boundary load in the x direction given as 10[MN/m] in SI units. The same load can instead be provided in British engineering units (pound-force per inch) as 57101.47[lbf/in]. Solving the model with the equivalent boundary load in British engineering units gives exactly the same result as the SI version.
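The equivalence of the two load values quoted above follows directly from the exact definitions of the pound-force and the inch. A quick check:

```python
# 1 lbf = 4.4482216152605 N and 1 in = 0.0254 m, both exact by definition.
LBF_IN_NEWTONS = 4.4482216152605
INCH_IN_METERS = 0.0254

load_si = 10e6                                             # N/m (10 MN/m)
load_imperial = load_si * INCH_IN_METERS / LBF_IN_NEWTONS  # lbf/in
print(round(load_imperial, 2))  # → 57101.47
```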
The boundary load in SI units (left) and the boundary load in British engineering units (right).
You can enter material data in a unit system other than the default by simply writing units in brackets. The same logic applies to the Geometry node.
Young’s Modulus in GPa in the Material node (left) and the length in feet in the Geometry node (right).
What does the COMSOL® software do if you assign an incorrect or unexpected unit? In this case, the input field is displayed in orange for physics interfaces, physics features, and materials. An inconsistent unit can result from summing terms whose units represent different physical quantities, such as 273[K] + 3[ft]. A tooltip displays a message in the corresponding field.
For valid but unexpected units, the message contains the deduced and expected units in the current unit system. For a Boundary Load feature in the model discussed above, if the load is entered in kilograms, then the display of the load’s input field is orange and shows a warning message when the pointer is moved over the text. In this case, a boundary load given as 10[kg] is actually treated as 10 without units, which in the given unit system means 10[N/m].
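The consistency check behind these warnings can be pictured as bookkeeping on dimension exponents. The toy class below is not how COMSOL implements units; it only illustrates why a sum such as 273[K] + 3[ft] is flagged:

```python
class Quantity:
    # Dimension exponents stored as (length, mass, time, temperature).
    def __init__(self, value, dims):
        self.value, self.dims = value, tuple(dims)

    def __add__(self, other):
        # Addition is only defined for matching dimension vectors.
        if self.dims != other.dims:
            raise ValueError("inconsistent units: dimensions differ")
        return Quantity(self.value + other.value, self.dims)

KELVIN = (0, 0, 0, 1)   # temperature
FOOT = (1, 0, 0, 0)     # length (value stored internally in meters)

try:
    Quantity(273.0, KELVIN) + Quantity(3 * 0.3048, FOOT)
except ValueError as e:
    print(e)  # → inconsistent units: dimensions differ
```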
The warning message for the unexpected unit in the Boundary Load feature.
At the Geometry node, you can specify scaled or prefixed metric units of length as well as angles in degrees or radians. For specific applications, like MEMS, you may need geometry units in micrometers instead of the default meters. Note, however, that the length unit of the geometry does not affect units that include length in the physics interfaces or any other part of COMSOL Multiphysics. Also, the material properties from the Material Library are in SI units by default, so changing the unit system of the model does not automatically change the material data.
In COMSOL Multiphysics, parameters and variables (under Definitions) can be defined in any units by writing the unit in brackets after the numbers. The same logic applies to functions (under Definitions), where units of the arguments and of the function can be specified separately. Variables, parameters, and functions defined in units other than the model’s unit system can then be used in material data and physics features.
COMSOL Multiphysics includes postprocessing tools for even more power and flexibility, enabling you to see your results in different unit systems or in different scaled or prefixed units of the same unit system. For example, the default plot of von Mises stress in the Solid Mechanics interface can be visualized with 19 different units. When the unit system of the model is changed to the British engineering unit system, the von Mises stress can be visualized in 49 different units.
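The rescaling behind such unit choices is plain arithmetic; for example, the same stress value expressed in Pa, MPa, and psi. The stress value below is hypothetical; the psi factor follows from the exact definitions of the pound-force and the inch:

```python
# 1 psi = 1 lbf / 1 in^2; the factor below follows from the exact
# definitions of lbf (4.4482216152605 N) and inch (0.0254 m).
PSI_IN_PASCALS = 6894.757293168361

stress_pa = 2.5e8              # Pa (hypothetical von Mises stress)
stress_mpa = stress_pa / 1e6   # same value expressed in MPa
stress_psi = stress_pa / PSI_IN_PASCALS
print(stress_mpa, stress_psi)
```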
There are many examples that show the adaptability of the COMSOL software. For example, in COMSOL Multiphysics, both names and symbols can be used for a unit. You can denote an electric current in SI units as either 2.4[ampere] or 2.4[A].
In this blog post, we’ve demonstrated how the COMSOL Multiphysics software deals with different unit systems, the flexibility of using and mixing units from different unit systems, and how to handle unexpected units. This functionality empowers you with more modeling flexibility and reduces chances of error while dealing with different unit systems or derived/prefix units of the same system.
Want to see how the flexible unit functionality of COMSOL Multiphysics can benefit your modeling and simulation? Learn more about COMSOL Multiphysics via the button above and when you’re ready, request a software demonstration.