This blog post discusses *Integrative Engineering: A Computational Approach to Biomedical Problems*: its content and intended readers, the purpose and motivation for writing the book, and, most importantly, how an in-depth understanding of the “intricate machinery” of computational modeling can make you a better modeler.

*Integrative Engineering: A Computational Approach to Biomedical Problems* is based on my years of firsthand experience teaching computational modeling of multidisciplinary problems, with the motivation of encouraging transdisciplinary learning, integrative thinking, and holistic problem-solving. The textbook thus reflects the ideas I have developed over the years in my search for a practical approach to updating engineering curricula and making them relevant to our time.

I developed this book in the spirit of encouraging integrative learning, questioning, hypothesis testing, problem-solving, inventing, designing, and prototyping. I also want to encourage learning and researching by linking commonalities across compartmentalized disciplines based on the underlying mathematics, in order to generate novel solutions and cultivate a sense of limitless possibility in engineering research and industrial R&D activities.

In a way, this motivation has something to do with COMSOL Multiphysics®, a software I started using quite some time ago. I still remember my early reluctance toward using this so-called *equation-based modeling software*. At the time, all of my previous finite element modeling (FEM) experience had given me the impression that seeing equations is something only a program developer needs to do; a program user would not need to concern himself or herself with them. COMSOL Multiphysics® was the FEM program that forced me to look at the underlying governing equations.

The more equations I saw, the more commonality I found in the governing equations for different physical phenomena. This led me to believe that an integrative engineering approach based on multiphysics modeling is possible. For example, in one chapter of the book, I detail the procedure for developing the differential equations for different engineering problems, including mechanical, heat, mass transport, and wave propagation problems, to demonstrate that differential equations can be of the same mathematical type even though they govern different physics. This highlights the fact that for countless real-world problems, we may only need to deal with a limited number of types of governing differential equations.
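As a familiar illustration of this commonality (a standard result, not an excerpt from the book), transient heat conduction and dilute-species diffusion are governed by equations of the same parabolic type:

```latex
% Heat conduction: temperature T, conductivity k, volumetric heat source Q
\rho C_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \nabla T \right) + Q

% Diffusive mass transport: concentration c, diffusivity D, reaction rate R
\frac{\partial c}{\partial t} = \nabla \cdot \left( D \nabla c \right) + R
```

Both are instances of one generic form, with only the field variable, the diffusion-like coefficient, and the source term relabeled, so a numerical method that solves one can solve the other.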

Because I adopted this equation-based concept, the book is unlike any other book on FEM and biomedical applications, although it discusses the procedures used in the finite element method. It introduces a computational modeling approach that facilitates integrative learning by consolidating commonalities across disciplines and building a deep understanding of how the “intricate machinery” of computational modeling operates.

This book is written for undergraduate and graduate students in engineering and applied sciences, as well as practicing engineers in industry and R&D labs. As a valuable resource for finding and formulating solution ideas from complementary fields of engineering, this book will be useful not only for novice modelers, but also for experienced simulation engineers.

*Examples of computational models for biomedical applications. An image-based modeling example for a denture with and without reinforcing wires (top) and a CAD-based modeling example for a spine fixation device (bottom).*

*Integrative Engineering: A Computational Approach to Biomedical Problems* is structured in four parts:

- Part I presents the rationale for converting from a compartmentalized disciplinary approach to a transdisciplinary one in education and argues for promoting integrative, rather than reductive, learning.
- Part II provides a systematic discussion of the hands-on details of computational modeling procedures.
- Part III discusses the environments of common modeling software and explains the connections between software settings and engineering fundamentals. It also covers methods for developing hands-on skills in computational modeling, practical issues in image-based modeling, and the relevant standardization and regulatory processes.
- Part IV provides useful background in the mechanics of materials and mathematics for “just in time” learning and reference.

The textbook aims to lay down some groundwork toward what will be a long journey of restructuring the engineering curriculum with the assistance of a computational-modeling-based investigative tool. In future editions of this book, more integrated problems will be presented and discussed as case studies.

What do I mean by *restructuring the engineering curriculum*? In most universities, an engineering curriculum is a four-year program. When we talk about encouraging students to embark on transdisciplinary learning, we often encounter the argument that we are producing “jacks of all trades and masters of none”. In this book, readers will learn how an integrative engineering approach assisted by interactive computational modeling can help not only reduce many of the redundancies in existing curricula, but also provide meaningful visualization of scientific principles at work in real-world applications. Furthermore, readers will learn the difference between learning *how* and learning *that*, a distinction that especially helped solidify my vision for a new engineering curriculum. I believe it will make a big difference in readers’ views toward learning, too.

The idea of going beyond learning *that* to learning *how* is key to incorporating relevant humanities content in engineering curricula. Doing so will help train engineers who are not only technically competent, but also conscious of social needs, so that they can innovate not just for technological pursuits, but also for humanity. With an integrative approach, we will be able to do R&D in a much more effective and scientifically guided way, rather than by traditional trial and error.

The book encourages learning *how* by emphasizing that modeling engineering problems amounts to solving partial differential equations (PDEs) through computational means. It takes a systematic look inside the “black box” of how engineering knowledge is expressed mathematically and examines the ways in which differential equations are solved by computer-based approximate methods:

- Domain discretization
- Field quantity interpolations
- Relationships between the types of elements and order of polynomial functions for discretization
- Evaluation of weighted integrals of residuals
- Reduction of strong form PDEs to weak form PDEs
- Discretization of differential equations into algebraic matrix equations
- The benefit of isoparametric elements
- Gauss quadrature and numerical integration
- The minimization of approximation errors
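To make the weighted-residual and strong-to-weak-form items above concrete, consider the standard one-dimensional model problem (a textbook derivation, not an excerpt from this book):

```latex
% Strong form: -u''(x) = f(x) on (0,1), with u(0) = u(1) = 0.
% Weighted residual statement: multiply the residual by a test function v
% (with v(0) = v(1) = 0) and require the weighted integral to vanish:
\int_0^1 \left( -u'' - f \right) v \, \mathrm{d}x = 0

% Integrating by parts moves one derivative onto v, giving the weak form,
% which demands less smoothness of u:
\int_0^1 u' \, v' \, \mathrm{d}x = \int_0^1 f \, v \, \mathrm{d}x
```

Choosing the approximations of u and v from the same finite element space then turns the weak form into a system of algebraic matrix equations.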

The book discusses domain discretization using various elements in detail. You will learn how shape functions are developed using Lagrange interpolation formulas for a full spectrum of Lagrange elements (a feature you will not find in any other book), including 1D bar elements, 2D rectangular and triangular elements, and 3D hexahedral and tetrahedral elements, as well as using Hermite interpolation formulas for beam elements. The origin and meaning of the two interpolation formulas are also covered in the book.
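For orientation (standard formulas, not quoted from the book), the Lagrange interpolation formula builds each shape function to equal 1 at its own node and 0 at all others; for a three-node quadratic 1D element on the reference coordinate ξ ∈ [−1, 1], it yields:

```latex
% General Lagrange basis over nodes x_1, ..., x_n:
N_i(x) = \prod_{\substack{j = 1 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}

% Quadratic 1D element with nodes at \xi = -1, 0, 1:
N_1(\xi) = \tfrac{1}{2} \xi (\xi - 1), \qquad
N_2(\xi) = 1 - \xi^2, \qquad
N_3(\xi) = \tfrac{1}{2} \xi (\xi + 1)
```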

*Turning integration into multiplication using Gauss quadrature: A case of 3-point Gauss quadrature.*
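The idea in the caption can be sketched in a few lines of stand-alone Java (an illustrative snippet of my own, not code from COMSOL® or the book): the integral over [−1, 1] is replaced by a weighted sum of integrand values at three fixed points, which is exact for polynomials up to degree five.

```java
import java.util.function.DoubleUnaryOperator;

public class Gauss3 {
    // 3-point Gauss-Legendre nodes and weights on [-1, 1];
    // exact for polynomials up to degree 5
    static final double[] X = {-Math.sqrt(3.0 / 5.0), 0.0, Math.sqrt(3.0 / 5.0)};
    static final double[] W = {5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0};

    // The integral becomes a sum of three multiplications
    static double integrate(DoubleUnaryOperator f) {
        double sum = 0.0;
        for (int i = 0; i < 3; i++) {
            sum += W[i] * f.applyAsDouble(X[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        // The exact value of the integral of x^4 over [-1, 1] is 2/5
        System.out.println(integrate(x -> x * x * x * x));
    }
}
```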

For example, see what different types of discretization mean and how they are related to the order of polynomial interpolation equations:

*Shape functions for 6-node quadratic and 10-node cubic triangle elements. Quadratic elements use second-order polynomial equations for domain discretization and cubic elements use third-order polynomial equations for domain discretization.*

You will also see the difference between a plane-stress and a plane-strain situation in a 2D solid mechanics problem:

*A plane-stress situation (left) versus a plane-strain situation (right).*

Other topics include the meaning of element order and integration points and the importance of performing a convergence study with mesh refinement to ensure the soundness of the modeling results:

*Mesh refinement is crucial for obtaining converged modeling results.*

In addition to these important fundamentals, real-world modeling problems are also discussed, like the denture and spine device models pictured in the first section.

Through hands-on experiences gained by problem-solving assignments in the process of learning *how*, students will see not only the feasibility, but also the practicality of solutions in a holistic way by taking advantage of computational tools. With this approach, real-world problems in a variety of domains, either individually or combined, can be dealt with in a transdisciplinary way.

For COMSOL Multiphysics® users, getting an in-depth look at the “intricate machinery” of computational modeling and how it operates will certainly enhance your modeling skills. To make the book more engaging to study, I also strive to add artistic beauty to the graphic illustrations throughout, creating them myself from mathematical equations with the open-source LaTeX typesetting system and companion packages such as Ti*k*Z and PGF. This allows me to show the physics and engineering concepts from a teacher’s perspective.

To learn more about *Introduction to Integrative Engineering: A Computational Approach to Biomedical Problems* or to purchase the book, click on the button below.

Guigen Zhang is a professor and associate chair of the Department of Bioengineering and professor of the Department of Electrical and Computer Engineering at Clemson University. He is a fellow of the American Institute for Medical and Biological Engineering.

Dr. Zhang has been invited to serve on expert review panels by the NIH, NSF, and other U.S. and Canadian agencies on subjects covering nanotechnology, biomaterials and biointerfaces, biotechnology, sensors and biosensors, nanoscale drug delivery, the neurotechnology nexus, and bioengineering research partnerships, among others. He has also been a keynote and invited speaker at numerous national and international professional conferences, including the Venture Conference in Switzerland and the OECD Conference in France. Professor Zhang also serves in prominent leadership roles in professional societies: he is currently the executive editor of the Biomaterials Forum of the Society for Biomaterials and president of the Institute of Biological Engineering.


To demonstrate this functionality, just like in the previous blog post, we will first load the Micromixer tutorial model from the Application Libraries. This model is available in the folder *COMSOL Multiphysics* > *Fluid Dynamics* and illustrates fluid flow and mass transport in a laminar static mixer.

The model performs a fluid flow simulation using a *Laminar Flow* interface. In the next step, it shows how to calculate the mixing efficiency by means of a *Transport of Diluted Species* interface, using the results from the fluid flow simulation as input. The species will be transported downstream based on the fluid velocity.

The computation time for this model is a few minutes. In the previous blog post, we made the computations go faster by not solving for the *Transport of Diluted Species* part. However, this time around, we will need the concentration profile throughout the mixer. To make the computation run more quickly in this case, you can set the *Predefined Element Size* to *Extremely coarse*.

For this example, the step of coarsening the mesh is optional and everything that follows will work even without this change.

Let’s now see how to use a parameterized slice plot together with an animation to export an entire series of images, where each image corresponds to a single slice.

This is the default plot for the concentration at 5 different *yz*-plane slices in the *x* direction in a solved version of the library model:

You can get a slightly improved visualization by setting the *Quality Resolution* setting to *Extra fine*, like so:

Instead of the default 5 evenly spaced slices in the *Slice* plot for the *Concentration*, you can change the *Plane Data* entry method to *Coordinates*. For example, you can generate a single slice at 0.5 mm, as follows.

The resulting plot is shown in the figure below:

We can parameterize the location of the slices. To do so, right-click the *Results* node and select *Parameters*.

Define a parameter *xcut* with the value -3.5[mm]. (The microchannel ranges from -3.5 mm to 8 mm in the *x* direction.)

For the *Slice* plot, in the section for *Plane Data*, type *xcut* in the edit field for *X-coordinates*.

The corresponding slice plot is shown here:

What if you would now like to export a series of images corresponding to different values of the slice position? For this purpose, you can use a file-export-based animation.

To generate an animation, select *File* from the *Animation* menu in the ribbon toolbar.

Alternatively, you can right-click the *Export* node under *Results* and select *Animation* > *File*.

In the Settings window of the *Animation* node in the model tree, select *Image sequence* as the *Output* type.

For the *Filename*, type, for example, *C:\COMSOL\my_image.png*. This assumes that you have a folder *C:\COMSOL* in your system, but you can write to any folder where you have write permission.

To link the exported file to the parameter *xcut*, change the *Sequence* type to *Result parameter*. This setting is available in the *Animation Editing* section.

Choose *xcut* as the *Parameter*, with *Start* set to *-3.5*, *Stop* set to *8*, and *Unit* set to *mm*.

At the top of the Settings window for *Animation*, click *Export* to start the generation of images. The images will get a suffix corresponding to the number in the sequence. The number of frames, or images, is set in the *Frames* section.

A series of images is generated with names: my_image01.png, my_image02.png, …, my_image25.png, as shown in the screenshot below.
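For reference, the exported frames sample *xcut* uniformly between the *Start* and *Stop* values set earlier. The short stand-alone Java sketch below shows that spacing arithmetic (the class and method names are mine, and it assumes both endpoints are included, consistent with the 25 generated images):

```java
public class SlicePositions {
    // Evenly spaced slice positions over a parameter range, endpoints included.
    // For the settings in this post: xcut from -3.5 mm to 8 mm over 25 frames.
    static double[] slicePositions(double start, double stop, int frames) {
        double[] x = new double[frames];
        double step = (stop - start) / (frames - 1); // spacing between frames
        for (int i = 0; i < frames; i++) {
            x[i] = start + i * step;
        }
        return x;
    }

    public static void main(String[] args) {
        double[] x = slicePositions(-3.5, 8.0, 25);
        // Each exported image my_imageNN.png corresponds to one of these positions
        System.out.printf("first = %.3f mm, step = %.3f mm, last = %.3f mm%n",
                x[0], x[1] - x[0], x[x.length - 1]);
    }
}
```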

Let’s now see how the generation of images can be automated after solving a model in COMSOL Multiphysics.

To be able to define a sequence of operations under the *Study* node, we enable *Advanced Study Options*. This option is available from the Model Builder toolbar: click the “eye” symbol to see the menu.

Under the *Job Configuration* node that is now visible, select *Sequence*. This procedure was described in the previous blog post on how to use job sequences.

In the Settings window for *Solution*, select *All*. This ensures that all study steps are run.

Right-click *Sequence* and select *Results* > *Export to File*.

In the *Export to File* Settings window, for the *Run* option, select *Animation 1*. In this simple example, with just one node under *Export*, we could just as well have left the default option, *All*.

To solve using the *Sequence*, right-click the *Sequence* node and select *Run*. Alternatively, click the *Run* button at the top of the Settings window.

The previous export operation generated a series of 3D images. What if you want a series of 2D images for each slice? This can be accomplished by using a parameterized *Cut Plane*.

Right-click the *Data Sets* node and select *Cut Plane*.

In the Settings window of the *Cut Plane*, type *xcut* for the *X-coordinate*.

The already existing 3D plot groups are not useful for 2D plots, so right-click *Results* and select *2D Plot Group*.

In the Settings window for the *2D Plot Group*, select *Cut Plane 1* as the *Data set*.

Add a *Surface* plot node under the *2D Plot Group* and change the *Expression* to *c*, corresponding to the concentration.

To tidy up the list of plot groups, change the name of the *2D Plot Group* to *Cut Plane Concentration*.

Now, go to the *Animation* node in the model tree. In the corresponding Settings window, change the *Subject* to *Cut Plane Concentration*.

Click *Export* to generate the sequence of 2D images, as shown in the file browser view here:

To get this view using Windows® Explorer, change the view to *Large Icons*.

Just like in the previous example, you can now go ahead and run the *Job Sequence* to solve and then have the set of images generated and saved to file — automatically.

To try this example yourself, click on the button above to access the MPH-files.

*Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.*

There are four main ways you can generate the geometry for your simulation in COMSOL Multiphysics:

- Draw the geometry within the COMSOL® software
- Import an external CAD file
- Use one of the LiveLink™ products
- Import mesh data from an external file

Each of these means of geometry creation provides different opportunities and advantages. The first method enables you to generate your geometry using only the COMSOL Multiphysics modeling environment. This method is the focus of today’s blog post, as we will discuss its associated workflow.

The general steps for creating a geometry include:

- Building geometry primitives corresponding to the model’s spatial dimension
- Using geometry operations (such as Boolean, partition, and transformation operations) to combine existing geometry objects into new ones
- Indicating how the software should deal with overlapping objects using *Form Union* or *Form Assembly*

Sometimes, it can be more efficient to create geometry primitives in a lower dimension using work planes and then extend them into a higher dimension. Work planes can also be used to define cross sections, bringing part of a higher-dimensional entity into a lower-dimensional workspace.

Let’s now dive further into the details of using geometry primitives, geometry operations, and work plane operations. Note that these operations can be used for geometries native to COMSOL Multiphysics as well as those created through another CAD program.

COMSOL Multiphysics contains a number of ways in which you can generate the objects for your geometry. One option is to choose an object from the selection of built-in geometric shapes in the software, select and add the primitive object to your geometry sequence, and then edit the template provided through the Settings window. This enables you to specify the exact position, angle, and dimensions of the object as well as to quickly make changes to any of those settings, if needed. Once in the sequence, the object can then be combined and manipulated with other primitive objects to form your final geometry.

*Creating and modifying a rectangle using the Settings window in COMSOL Multiphysics.*

The types of objects available for you to choose from depend on the spatial dimension of your component. This includes geometric primitives for conventional shapes as well as other less traditional shapes. For 3D models, you can add objects like blocks or spheres as well as torus or helix objects. Similarly, for 2D models, primitives such as rectangles, circles, Bézier polygons, and parametric curves are available.

Another option for generating objects in your geometry sequence, available for 2D and 1D models, is to sketch geometric primitive objects with the mouse.

This is done by:

- Clicking on the respective button for the object you want to build
- Using the mouse in the *Graphics* window to click and define the center or corner of the object
- Dragging the mouse to generate the desired size and then clicking again

Immediately afterward, the object you have outlined appears and is added to your geometry sequence.

*Creating and modifying a rectangle using the geometry drawing tools.*

For 3D models, although you cannot sketch a geometry primitive with the mouse, you can draw a cross section of it in a work plane, which can then be expanded into a 3D object. We demonstrate both options mentioned above in dedicated chapters within the building 2D geometries part of the video series. Additionally, we discuss the advantages of using parameters during this process and demonstrate how they aid in streamlining your geometry setup.

After generating a few objects in your geometry sequence, you can start to combine them in meaningful ways using geometry operations. In the video chapters on building 2D geometries and 3D geometries in our series, we build a rectangular plate containing slots and a grille, respectively. This was done by using a combination of several Boolean and transformation operations.

*The rectangular plate with slots (left) and with a grille (right).*

The Boolean operations used to create the geometries pictured above include the *Union*, *Intersection*, and *Difference* operations, which enable you to combine objects, create a new object from the intersection of other objects, and subtract objects from one another, respectively. Likewise, the transformation operations used include the *Move*, *Copy*, *Mirror*, and *Array* operations, which enable you to change the position of objects; create duplicate objects; reflect objects over a plane, line, or point; and create an arrangement of duplicates of another object.

Aside from some of the more conventional geometry features used above, there are also specialized geometry tools used to help create certain types of geometries. Partition operations enable you to split geometric entities such as objects, domains, boundaries, and edges so that you can separate, remove, or simplify the geometry in your model. When we discuss using partition operations for geometries in the video, we demonstrate how to perform this on a helix object as well as the geometry for the shell and tube heat exchanger tutorial model.

*A helix geometry split down the middle, from our video chapter on partitioning geometries.*

As you continue to build upon your geometry (adding more of these operations and other primitives to your sequence), you’ll notice that your sequence can become quite complex and that making any changes thereafter can become cumbersome. Changing the size of one object in the geometry may require other objects to be resized to accommodate that change. For these and other reasons, we encourage the use of parameters in the geometry operations you use in your sequence. We discuss the reasons for this and demonstrate how with a few of the example geometries built throughout the video series.

COMSOL Multiphysics contains several tools known as work plane operations, which can be used to convert a 2D geometry in a work plane into a 3D object. In the video series, we show and demonstrate the *Extrude*, *Revolve*, and *Sweep* operations.

The *Extrude* operation enables you to extrude objects from a work plane or planar face to create 3D objects.

*The* Extrude *operation, converting a rectangular plate with holes into a 3D block containing slots. The blue arrow in the center represents the direction in which the shape is extended, which is perpendicular to the work plane.*

With the *Revolve* operation, you can revolve objects from a work plane or planar face about an axis to create 3D objects.

*The* Revolve *operation, converting a circle into a torus. The blue arrows represent the axis about which the shape is revolved.*

Finally, there’s the *Sweep* operation, which enables you to sweep objects from a work plane or planar face along a path to create a 3D object.

*The* Sweep *operation, converting a circle and 2D line path into a pipeline. Two work planes that are perpendicular to each other are used to define the shape and line path separately.*

Using these work plane operations (starting from a 2D model and then expanding it into 3D) can be a significantly quicker alternative for building your 3D geometry, as opposed to building it entirely of 3D objects.

The COMSOL Multiphysics software also contains a tool for converting a 3D geometry into a 2D geometry. This is done through using a work plane along with the *Cross Section* geometry operation. The functionality can be used to simplify your model, among other things, which we discuss in the video series.

*The axisymmetric cross section of a light bulb, built in the video chapter on creating 2D models from 3D geometries.*

The geometry generated through the *Cross Section* operation is based on the intersection of your 3D geometry with a work plane. Thus, the 2D geometry you obtain is the result of wherever the work plane cuts through the 3D solids in your model. Within the operation, you can choose for the work plane to intersect (and thus include) all objects or a selection of objects that you specify.

To obtain the appropriate cross section for your analysis, using this functionality may require performing some additional preparation on the original 3D geometry. Sometimes, this means separating and then removing certain parts of your geometry, wherein partition operations can be helpful. We elaborate on this further and demonstrate it in a dedicated chapter within the video series.

Whether you are building a geometry entirely within COMSOL Multiphysics or working off of an external file, you can use the geometry functionality discussed in this blog post to completely customize the composition of your geometry objects. If you are interested in seeing these tools in action, watch our introductory geometry video series:


We often make modeling decisions based on partial information. Does the flow stay laminar or does it become turbulent? Does a solid stay elastic or does it yield to become plastic under the specified loads and constraints? Are deformations large enough to require a geometrically nonlinear analysis or is small deformation theory good enough? Sometimes, a limit analysis can answer these questions. If we can answer such questions before solving the problem, we can pick the appropriate model. If not, it is economical to solve the simpler model and switch to the more complicated model only if the solution is not valid.

For example, we can do elastic analysis first and switch to elastoplastic analysis only if the maximum stress obtained by the first analysis is above the elastic limit. Similarly, we can solve assuming laminar flow and include turbulence in our model only if the Reynolds number obtained from the first analysis is high enough.

In these and other situations, where we may have to change our modeling approach based on preliminary results, it would save our efforts to automate the workflow. Information obtained from the first study can also be reused to make subsequent studies computationally efficient. Today, we will demonstrate how to do so by writing code to automate an elastoplastic analysis. The code runs an elastic analysis, checks if the maximum stress exceeds the yield stress of the material, and runs the model with plasticity if necessary.

Using the COMSOL Multiphysics® software on the Windows® operating system platform, you can build an app with a customized user interface and include methods for additional functionality. Apps built using the Application Builder in Windows® can then be run on any operating system. You can share the apps with colleagues, customers, students, and more.

In version 5.3 of the COMSOL® software, we introduced a new feature, called a model method, that lets you extend the functionality of the software by writing code inside the COMSOL Multiphysics graphical user interface (GUI), even when you do not intend to make apps. The *Record Code*, *Editor Tools*, *Language Elements*, and *Model Expression* features of the Application Builder can be used to easily generate the Java® code needed to write a model method.

*To add a model method, go to the* Developer *tab and click* Model Method*.*

*To run a model method, go to the* Developer *tab, click* Run Model Method*, and choose a method.*

In previous blog posts, we discussed how model methods work and demonstrated how to use them to create randomized geometry. Today, we will extend that conversation to physics and study settings.

To demonstrate how to use model methods in physics selection and study sequences, we will use a model based on an elastoplastic analysis of a plate with a hole from our Application Gallery. The Application Gallery example was set up with prior knowledge that the stress will exceed the elastic limit. As such, plasticity was added in the analysis. Today, we will have a model method find that out and, if plasticity is necessary, incorporate it automatically.

The procedure we want to automate here contains the following steps:

- Run an elastic study without plasticity.
- Check if the maximum stress exceeds the initial yield stress of the material.
- If the maximum stress is less than the initial yield stress, stop. Otherwise, run the elastoplastic study.
- If plasticity is necessary, reuse information from the elastic study to avoid unnecessary nonlinear analysis in the elastic range.

We add two studies and disable plasticity in the first study. In the second study, we add an *Auxiliary Sweep* for load ramping in the elastoplastic analysis. In the first study, the full load is applied by setting *para* to 1 in *Global Definitions*. In the second study, we use the parameters *p_low* and *p_next* to make the study efficient. These parameters are going to be set based on the results of the first study.

*The second study will be computed only if the assumptions in the first study turn out to be incorrect.*

In the *Results* section, we add a *Derived Values* node to evaluate the maximum stress from the first study. This could alternatively be done using the *Maximum Component Coupling* operator. This value, obtained in pascals (as shown in the Settings window for *Surface Maximum 1*), will be compared to the initial yield stress in pascals. To this end, we introduce the parameter *SY_scaled*.

*SYield and SY_scaled are for the* Materials *node and the model method, respectively.*

*A* Maximum Component Coupling *operator is another alternative for this operation.*

Now that we have all of the ingredients we need, let’s write the model method.

*A model method that is used to automate an efficient elastoplastic analysis.*

Two of the above lines warrant some discussion:

- Line 12: The first study is done using a load parameter of 1. If `StressRatio` in line 9 is greater than 1, its reciprocal will tell us the load parameter at the elastic limit. Note that we could do so here, as plasticity is the only possible nonlinearity in our problem; the model has no geometric nonlinearity or contact.
- Line 14: If plasticity is needed, we want to do the auxiliary sweep in steps of 0.05. We use the *ceil* (ceiling) method from the Java® Math class to obtain the lowest value of the load parameter that is a multiple of 0.05 and is after the elastic limit.
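The arithmetic behind those two lines can be mimicked in plain, stand-alone Java. The sketch below is illustrative only — the method and variable names are mine, and the actual model method is the one shown in the screenshot above — but it computes the elastic-limit load parameter from the stress ratio and rounds up to the next multiple of the 0.05 sweep step:

```java
public class LoadRamp {
    // stressRatio = (maximum stress from the elastic study) / (initial yield stress).
    // Returns {p_low, p_next}, matching the parameters updated by the model method.
    static double[] rampParameters(double stressRatio) {
        // Full load corresponds to a load parameter of 1, so the reciprocal
        // of the stress ratio is the load parameter at the elastic limit.
        double pLow = 1.0 / stressRatio;
        // First auxiliary-sweep value at or just past the elastic limit,
        // rounded up to a multiple of 0.05 with Math.ceil.
        double pNext = Math.ceil(pLow / 0.05) * 0.05;
        return new double[]{pLow, pNext};
    }

    public static void main(String[] args) {
        // Example: the elastic analysis overshoots yield by a factor of 2.7
        double[] p = rampParameters(2.7);
        System.out.printf("p_low = %.4f, p_next = %.2f%n", p[0], p[1]);
    }
}
```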

With this information, the second study (if necessary) is solved only once in the elastic region: at the elastic limit. We can see what values of the load parameter are used in the *Results* section.

*The lowest value of the load parameter, highlighted, is estimated using the elastic study.*

If you go back to *Global Definitions*, you will see that the model method has updated the parameters *p_low* and *p_next* from their original values of zero, shown earlier.

*Parameter values changed by a model method based on results from a preliminary study.*

Today, we have demonstrated setting up efficient physics choices and study sequences using methods. Similar tasks can be accomplished by scripting. However, model methods make it easy to grab the model objects and methods needed by using the same functionality employed in the Application Builder. When needed, these methods can be augmented by regular Java® classes, such as the Math class we used above, or your own classes.

We have only shown one way of performing tasks to illustrate the use of model methods in physics and solver settings, but there are alternatives and refinements. For example, in the elastoplasticity analysis, we added two studies in the Model Builder. Alternatively, you can use a single study where the plasticity and auxiliary sweep features can be enabled or disabled from a model method.

In the examples above, there are logical decisions that have to be made between studies. When there are no such decisions and you just want to refer to one study from another (say, to use one study as a precomputing step for a subsequent study), you can use the *Study Reference* feature. See the section *Study Reference* in the *COMSOL Multiphysics Reference Manual* for details.

If you have any questions related to today’s discussion or using COMSOL Multiphysics, contact us via the button below.

- Learn how model methods work in general and how to access built-in methods and objects
- Read about using model methods in preprocessing to create a randomized geometry
- Find out how to use a model method in postprocessing to listen to the noise generated by a vibrating gearbox
- See a COMSOL app for stress analysis of a pressure vessel, where a similar strategy has been used in elastoplastic analysis

*Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle and Java are registered trademarks of Oracle and/or its affiliates.*

As we saw in a previous blog post on creating randomized geometries, you can use the *Record Method* functionality to record a series of operations that you’re performing within the COMSOL Multiphysics graphical user interface (GUI) and then replay that method to reproduce those same steps. Of course, this doesn’t do us any good if we have already created the file — we don’t want to go back and rerecord the entire file. As it turns out, though, COMSOL Multiphysics automatically keeps a history of everything that you’ve done in a model file as Java® code. We can just extract the relevant operations directly from this code and insert them into a new model method.

*The* Compact History *option.*

To extract all of the history of all operations within a file, there are a few steps you need to take. First, go to the *File* menu and choose the *Compact History* option. We do this because COMSOL Multiphysics keeps a history of all commands, but we only want the minimum set of commands that were used to generate the existing model. Next, go to *File Menu* > *Save As* and save to a *Model File for Java* file type. You now have a text file that contains Java® code. Try this out yourself and open the resulting file in a text editor. This file always has lines of code at the beginning and end that are similar to what is shown below:

```
/* example_model.java */
import com.comsol.model.*;
import com.comsol.model.util.*;

public class example_model {
  public static Model run() {
    Model model = ModelUtil.create("Model");
    model.modelPath("C:\\Temp");
    model.label("example_model.mph");
    model.comments("This is an example model");
    ...
    /* Lines of code describing the model contents */
    ...
    return model;
  }

  public static void main(String[] args) {
    run();
  }
}
```

The above code snippet shows us what we can remove. Only the code between `Model model = ModelUtil.create("Model");` and `return model;` is used to define all of the features within the model. In fact, we can also remove the `model.modelPath();`, `model.label();`, and `model.comments();` lines. Go ahead and remove all of these lines of code in your text editor and you are left with just the set of commands needed to reproduce the model in a model method.

Next, open a new blank model file, go to the Application Builder, and create a new model method. Copy all of the lines from your edited Java® file into this new model method. Then, switch back to the Model Builder, go to the *Developer* tab, and choose *Run Model Method* to run this code. Running this model method reproduces all of the steps from your original file, including solving the model. Solving the model may take a long time, so we often want to trim our model method.

*A model method within the Application Builder.*

There are two approaches that you can take for trimming down the code. The first is to manually edit the Java® code itself, pruning out any code that you don’t want to rerun. It’s helpful to have the *COMSOL Programming Reference Manual* handy if you’re going to do this, because you may need to know what every line does before you delete it. The second, simpler approach is to delete the features directly within the COMSOL Multiphysics GUI. Start with a copy of your original model file and delete everything that you don’t want to appear within the method. You can delete the geometry sequence, mesh, study steps, results visualizations, and anything else that you don’t want to reproduce.

Let’s take a look at a quick example of this. Suppose that you’ve built a model that simulates thermal curing and you want to include this thermal curing simulation in other existing models that already have the heat transfer simulations set up.

As we saw in a previous blog post, modeling thermal curing in addition to heat transfer requires three steps:

- Defining a set of material parameters
- Adding a *Domain ODE* interface to model the evolution of the cure over time
- Coupling the heat of reaction from the curing into the thermal problem

We can build a model in the GUI that contains just these steps and then write out the Java® file. Of course, we still need to do some manual editing, and it’s also helpful to go through the *Application Programming Guide* to get an introduction to the basics. But once you’re comfortable with all of the syntax, you’ll see that the above three steps within the GUI can be written in the model method shown here:

```
model.param().set("H_r", "500[kJ/kg]", "Total Heat of Reaction");
model.param().set("A", "200e3[1/s]", "Frequency Factor");
model.param().set("E_a", "150[kJ/mol]", "Activation Energy");
model.param().set("n", "1.4", "Order of Reaction");
model.component("comp1").physics("ht").create("hsNEW", "HeatSource");
model.component("comp1").physics("ht").feature("hsNEW").selection().all();
model.component("comp1").physics("ht").feature("hsNEW").set("Q0", "-ht.rho*H_r*d(alpha,t)");
model.component("comp1").physics().create("dode", "DomainODE", "geom1");
model.component("comp1").physics("dode").field("dimensionless").field("alpha");
model.component("comp1").physics("dode").field("dimensionless").component(new String[]{"alpha"});
model.component("comp1").physics("dode").prop("Units").set("SourceTermQuantity", "frequency");
model.component("comp1").physics("dode").feature("dode1").set("f", "A*exp(-E_a/R_const/T)*(1-alpha)^n");
```

The first four lines of this code snippet define an additional set of global parameters. The next three lines add a *Heat Source* domain feature to an existing *Heat Transfer* interface (with the tag *ht*), apply it to all domains, and define the heat source term. The last five lines set up a *Domain ODE* interface, which is applied by default to all domains in the model, and set the variable name, the units, and the equation to solve.

*Running the model method from the* Developer *tab.*

We can run the above model method in a file that already has a heat transfer analysis set up. For example, try adding and running this model method in the Axisymmetric Transient Heat Transfer tutorial, available in the Application Library in COMSOL Multiphysics. Then, just re-solve the model to solve for both temperature and degree of cure.

Now, there are a few assumptions in the above code snippet:

- We want to model curing in all of the domains in our model
- There is already a component with the tag *comp1* to which we can add physics interfaces
- There is not already a *Domain ODE* interface with the tag *dode* in that component
- The temperature variable is defined as *T*, which we can use in the *Domain ODE* interface
- A heat transfer physics interface with the tag *ht* already exists, and we can add a feature with the tag *hsNEW* to it

Of course, as you develop your own model methods, you need to be able to recognize and address these kinds of general logical issues.

From this simple example, you can also see that you can create a model method that acts as a reusable template for any part of the modeling process in COMSOL Multiphysics. You might want to run such a template model method in every new file you create, possibly to load in a set of custom material properties, set up a complicated physics interface, or define a complicated set of expressions. You might also want to reuse the same model method in an existing file to set up a particular customized study type, modify solver settings, or define a results visualization that you plan to reuse over and over again.

Once you get comfortable with the basics of this workflow, you’ll find yourself saving lots of time, which we hope you’ll appreciate!

- How to Generate Random Surfaces in COMSOL Multiphysics®
- How to Model Gearbox Vibration and Noise in COMSOL Multiphysics®
- How to Create a Randomized Geometry Using Model Methods

*Oracle and Java are registered trademarks of Oracle and/or its affiliates.*

To demonstrate this functionality, we will first load the Micromixer tutorial model from the Application Libraries. This model is available in the folder *COMSOL Multiphysics* > *Fluid Dynamics* and illustrates fluid flow and mass transport in a laminar static mixer.

The model performs a fluid flow simulation using a *Laminar Flow* interface. In the next step, it shows how to calculate the mixing efficiency by means of a *Transport of Diluted Species* interface, using the results from the fluid flow simulation as input. The species will be transported downstream based on the fluid velocity.

The computation time for this model is a few minutes. To simplify the model a bit so that we can run the computation quicker, we won’t solve for the species transport. To achieve this, we will make one modification to the Settings window of the second study step, *Step 2: Stationary 2*, by clearing the *Transport of Diluted Species* check box.

We can make an additional change to the model in order for it to run faster. Set the *Sequence type* for the mesh to *Physics-controlled mesh* and the element size to *Extremely coarse*.

Now, we can compute *Study 1* to make sure everything works. The resulting plot shows the velocity magnitude at a few slices along the mixer geometry.

Here, we will focus our attention on one important part of a job configuration: the *Sequence* option.

To be able to define a sequence of operations under the *Study* node, we enable *Advanced Study Options*. This is an available menu option under the Model Builder toolbar. Click the “eye” symbol to see the menu.

Enabling this setting reveals a hidden *Job Configurations* node in the model tree. This node is something that you don’t need to worry about during conventional modeling work. It essentially stores low-level information pertaining to the order in which the solution process should be run. Normally, this is controlled indirectly from the top level of a study without the need for enabling *Advanced Study Options*.

Right-click *Job Configurations* and select *Sequence*.

Next, right-click *Sequence* to see, below the *Run* option, a variety of options that can be added as an ordered sequence of operations performed when running the sequence:

- Job
- Solution
- Other
- Save Model to File
- Results

*Job* refers to another sequence that is to be run from this sequence, while *Solution* runs a *Solution* node as available under the *Solver Configurations* node, available further up in the *Study* tree.

Under *Other*, you can choose *External Class*, which calls an external Java® class file. Another option, *Geometry*, builds the *Geometry* node. This can be used, for example, in combination with a parametric sweep to generate a sequence of MPH-files with different geometry parameters. The *Mesh* option builds the *Mesh* node.

*Save Model to File* saves the solved model to an MPH-file.

Under the *Results* option, you can choose *Plot Group* to run all or a selected set of plot groups. This is useful for automating the generation of plots after solving, so you don't have to manually click through all of the plot groups to generate the corresponding visualizations. The *Derived Value* option is there for legacy reasons; we recommend the *Evaluate Derived Values* option instead, which evaluates nodes under *Results* > *Derived Values*. The *Export to File* option runs any data export node under the *Export* node.

Let’s now create a simple sequence. Right-click the *Sequence* node and select *Solution*.

The default option for a *Solution* node in a sequence is to run all solution nodes. The *Run* option in the *General* section lets you specify which *Solution* data structures should be computed. The *Solution* data structures are available as child nodes, together with other nodes, under *Solver Configurations*. They can be recognized by their short names written within parentheses, such as *(sol1)* and *(sol2)*. The solution data structures are low-level representations of the solutions.

In this example, you can keep the default *All* for the *Solution* data structures.

We would like to save the file when the solver is finished. Right-click the *Sequence* node and select *Save Model to File*.

In the Settings window, you can see a number of options that are related to the capability of saving a series of MPH-files with parameters added at the end of the file name. This is very useful for parametric sweeps such as batch sweeps. However, we will not need to do this in such a simple example, so we change the option *Add parameters to filename* to *None*. At this stage, we also need to give a file name to a location where we have permission to write. In this example, the file name and path is *C:\COMSOL\myfile.mph*.

To run these operations, select the *Sequence* node and click *Run*.

The library model that we started from already has one defined derived value. You can see this under *Results* > *Derived Values* > *Global Evaluation*. The variable is called *S_outlet* and is the relative concentration variance at the outlet. It is defined as a variable under *Component* > *Definitions* > *Variables*.

The value of *S_outlet* is sent to *Table 1*. We can choose to store this value on file by changing a setting in the Settings window of *Table 1*. Change *Store table* to *On file* and type a file name; for example, *C:\COMSOL\my_data.txt*.

Now, add an *Evaluate Derived Values* operation to the sequence.

In the *General* section, you can change the *Evaluate* setting to *Global Evaluation 1*. However, in this simple example model, you can omit this step. Note that the name of the node in the model tree changes to *Evaluate: Global Evaluation 1*.

You can now run the sequence again. However, for this last step to make sense, you need to enable the *Transport of Diluted Species* interface in the Settings window for *Step 2: Stationary 2*.

If you want to run a job sequence from the command line in the Windows® or Linux® operating systems, or macOS, you cannot use the method shown above; instead, you need to add a parametric sweep with a dummy parameter. However, if you are already running a parametric sweep, then all you need to know is that a parametric sweep is just a special type of job sequence. You can then follow the instructions above, with a *Job Configurations* > *Parametric Sweep* node replacing the *Job Configurations* > *Sequence* node.

The reason for this is historical and reflects the evolution of the *Study* node functionality over time. The operating system command interface doesn’t let you run any part of a *Study* node that is not controlled at the top level of the *Study* node. You can only specify which study to run, for example, in the Linux® operating system:

```
comsol batch -inputfile mymodel.mph -study std1
```

for *Study 1* with tag *std1*.

You cannot run a sequence in this way, since the top-level study step is unaware of your edits under the *Job Configurations* node. The easiest way to make the study step at the top of the *Study* tree "aware" of those edits is to add a parametric sweep over an arbitrary parameter defined under *Global Definitions* > *Parameters*; say, *dummy* with value 1. Sweeping over this parameter adds the extra overhead needed to get a handle on the *Job Configurations* node from the top level of the *Study* node. Then, you can issue a command-line batch command to run it.

This is how the corresponding “dummy” sweep will look:

The following figure shows the corresponding sweep over one parameter value for the dummy parameter.

Now, knowing that the *Parametric Sweep 1* node is just a special type of *Sequence* node, the child nodes *Solution 1*, *Save Model to File 1*, and *Evaluate: Global Evaluation 1* are just as they are in the example above using *Sequence*.

Enable the display of model tree tags by selecting *Tag* from the *Model Tree Node Text* menu, available in the Model Builder toolbar.

The study tag *std1* is now visible in the model tree:

The Linux® command shown earlier will now run the sequence of operations that solves, saves the model to file, and finally evaluates the *Global Evaluation* node. Note that if you only have one *Study* node in your model, then you can skip the input argument *study std1*.

Job sequences can be used to automate a number of common tasks after solving a model. In this blog post, we have seen examples of:

- Saving the model to file as an MPH-file after solving
- Exporting *Derived Values* to file automatically after solving

There are other tasks that use job sequences that you can try on your own, including:

- Regenerating all plots after solving
- Exporting plot data to file
- Exporting image data to file

We hope you find that job sequences are a useful feature for your everyday modeling work!

*Oracle and Java are registered trademarks of Oracle and/or its affiliates. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Linux is a registered trademark of Linus Torvalds in the U.S. and other countries. macOS is a trademark of Apple Inc., registered in the U.S. and other countries.*

We have already discussed how to generate random-looking geometric surfaces by using *sum* and *if* operators in combination with uniform and Gaussian random distribution functions. The idea is that by summing up a set of spatially varying waves with careful choices of amplitudes and phase angles, we can mimic the type of randomness frequently found in natural materials and many natural phenomena in general.

To generate synthetic material data in 2D, we can use the same double-sum expression that we used to create randomized CAD surface data in the previous blog post:

f(x,y)=\sum_{k=-K}^{K} \sum_{l=-L}^{L} a(k,l) \cos(2 \pi(kx+ly)+\phi(k,l))

Note that this sum could also be used to generate random data for use in boundary conditions on surfaces in a 3D model.

For the 3D volume case, we will need triple sums:

f(x,y,z)=\sum_{k=-K}^{K} \sum_{l=-L}^{L} \sum_{m=-M}^{M} a(k,l,m) \cos(2 \pi(kx+ly+mz)+\phi(k,l,m))

The frequency-dependent amplitudes will take their values from a random distribution according to

a(k,l) = g(k,l) h(k,l)=\frac{g(k,l)}{\vert k^2+l^2\vert^{\frac{\beta}{2}}}=\frac{g(k,l)}{(k^2+l^2)^{\frac{\beta}{2}}}

and

a(k,l,m) = g(k,l,m) h(k,l,m)=\frac{g(k,l,m)}{\vert k^2+l^2+m^2 \vert^{\frac{\beta}{2}}}=\frac{g(k,l,m)}{(k^2+l^2+m^2)^{\frac{\beta}{2}}}

for the 2D and 3D cases, respectively.

The functions *g*(*k*,*l*) and *g*(*k*,*l*,*m*) have a random Gaussian, or normal, distribution, while *h*(*k*,*l*) and *h*(*k*,*l*,*m*) are frequency-dependent amplitude functions whose values taper off for higher frequencies in accordance with the spectral exponent *β*. The higher the value of the spectral exponent, the smoother the generated data will be. For a variety of reasons, many natural processes have this property, in which slow variations dominate over fast ones.

The phase angles *φ* are sampled from a function *u* that has a uniform random distribution between 0 and 2*π*:

*φ*(*k*,*l*) = *u*(*k*,*l*) and *φ*(*k*,*l*,*m*) = *u*(*k*,*l*,*m*)

The sums run over a set of discrete frequencies. More nonzero terms in a sum imply a larger number of higher-frequency contributions, resulting in data that contains finer details. The maximum frequencies in each direction are determined by the integers *K*, *L*, and *M*, respectively. These types of sums are closely linked to discrete inverse cosine transforms. They essentially correspond to an inverse cosine transform of the amplitude coefficients *g*(*k*,*l*) and *g*(*k*,*l*,*m*), with some additional manipulations of the phase angles. For details, see the previous blog post on how to generate random surfaces in COMSOL Multiphysics.
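As a sketch of what these sums compute, the 2D expression can be evaluated directly in plain Java, independent of COMSOL (the class and variable names here are hypothetical). Seeded `Random` draws stand in for *g* and *u*, and because the frequencies *k* and *l* are integers, the generated field is periodic with period 1 in each direction:

```java
import java.util.Random;

public class RandomField {
    static final int K = 8;          // maximum frequency (K = L in this sketch)
    static final double BETA = 1.5;  // spectral exponent; higher values give a smoother field
    static final double[][] amp = new double[2 * K + 1][2 * K + 1];   // a(k,l)
    static final double[][] phase = new double[2 * K + 1][2 * K + 1]; // phi(k,l)

    static {
        Random rnd = new Random(42); // fixed seed for reproducibility
        for (int k = -K; k <= K; k++) {
            for (int l = -K; l <= K; l++) {
                if (k == 0 && l == 0) continue;  // skip the constant term, as in the if() operator
                double g = rnd.nextGaussian();   // Gaussian amplitude g(k,l)
                double h = Math.pow(k * k + l * l, -BETA / 2.0); // spectral decay h(k,l)
                amp[k + K][l + K] = g * h;
                phase[k + K][l + K] = 2 * Math.PI * rnd.nextDouble(); // uniform phase angle
            }
        }
    }

    // f(x,y) = sum over k,l of a(k,l) * cos(2*pi*(k*x + l*y) + phi(k,l))
    static double f(double x, double y) {
        double sum = 0;
        for (int k = -K; k <= K; k++)
            for (int l = -K; l <= K; l++)
                sum += amp[k + K][l + K]
                     * Math.cos(2 * Math.PI * (k * x + l * y) + phase[k + K][l + K]);
        return sum;
    }

    public static void main(String[] args) {
        // Periodicity check: integer frequencies make f periodic with period 1
        System.out.println(Math.abs(f(0.3, 0.7) - f(1.3, 0.7)) < 1e-9); // prints true
    }
}
```

The triple sum for the 3D case extends this in the obvious way with a third index *m* and a third coordinate.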

In COMSOL Multiphysics, the following double-sum expression can be entered in various edit fields, such as 2D material properties or 3D boundary conditions:

```
0.01*sum(sum(if((k!=0)||(l!=0),((k^2+l^2)^(-b/2))*g1(k,l)*cos(2*pi*(k*s1+l*s2)+u1(k,l)),0),k,-N,N),l,-N,N)
```

We can use a similar expression for the triple sum, which can be used for 3D material data, loads, sources, and sinks:

```
0.01*sum(sum(sum(if((k!=0)||(l!=0)||(m!=0),((k^2+l^2+m^2)^(-b/2))*g1(k,l,m)*cos(2*pi*(k*x+l*y+m*z)+u1(k,l,m)),0),k,-N,N),l,-N,N),m,-N,N)
```

where we have set *K* = *L* = *M* = *N*.

For more details about the underlying theory and syntax used here, see the blog post mentioned above.

Working with models that contain triple sums is computationally quite expensive. It is more efficient to first generate the data and export it to file and then import it again as an interpolation function, perhaps in a separate model. This interpolation function can then be used in a variety of ways, as we will explain later. Alternatively, you can use external software to generate the data by means of inverse FFT.

Let’s now take a look at how to generate 3D material data.

Creating a 3D volume matrix of random data is surprisingly easy. It amounts to creating a couple of random functions, some parameters, a *Grid* data set, and an *Export* node.

Start by creating a random function for the amplitudes with 3 input arguments based on a normal, or Gaussian, distribution. This corresponds to the function *g*(*k*,*l*,*m*) in the mathematical description above. In this case, we arbitrarily use the default settings for a random function with the mean value set to 0 and the standard deviation set to 1.

Next, we create a random function for the phase angles with 3 input arguments based on a uniform distribution between 0 and 2*π*, corresponding to the function *u*(*k*,*l*,*m*) above.

Now, create a data set of the type *Grid 3D* that references the random functions as a source. We need this data set to give an evaluation context to the triple-sum expression that we will define later in the *Export* node.

We will use two results parameters, N and b, for the spatial frequency resolution and spectral exponent, respectively.

To make it easier to work with the large data sets that are generated, you can turn off the *Automatic update of plots* option. This setting is available in the Settings window of the *Results* node. Turning it off avoids recomputing the expressions each time you click on a plot group under *Results*.

To visualize the data before exporting to file, add a *Slice* plot and type (or paste) the expression:

```
0.01*sum(sum(sum(if((k!=0)||(l!=0)||(m!=0),((k^2+l^2+m^2)^(-b/2))*g1(k,l,m)*cos(2*pi*(k*x+l*y+m*z)+u1(k,l,m)),0),k,-N,N),l,-N,N),m,-N,N)
```

To export the data, add a *Data* node under *Export* and type in the same expression as for the *Slice* plot above. In the Settings window of the *Data* node, make sure to set the data set to *Grid 3D* and to specify a file name that the data will be written to. Here, we can let the points be evaluated in a way that is independent of the *Grid 3D* data set.

For the setting *Points to evaluate in*, select *Regular grid*. For *Data format*, select *Grid*. You can freely choose the number of *x*, *y*, and *z* points to evaluate in. In the figure below, these points have each been set to *50*. Note that data generation corresponding to numbers higher than *50* may take a very long time to generate. For a 50x50x50 grid, we already get 125,000 data points.
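For a sense of the scale involved (a minimal sketch, not COMSOL code), enumerating a regular grid over the unit cube confirms the point count, and each of those points costs one evaluation of the triple sum:

```java
public class GridCount {
    public static void main(String[] args) {
        int n = 50;  // points per axis, as in the export settings above
        int count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    count++;       // one triple-sum evaluation per grid point
        System.out.println(count); // 125000
    }
}
```

Since the triple sum itself runs over (2*N*+1)³ terms, the total work grows very quickly with both the grid resolution and the frequency resolution *N*.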

The text file that is generated and exported can now be imported to a new file for the purpose of setting up a physics analysis where we use the generated data in material properties. This can be done for any type of physics, including electromagnetics, structural mechanics, acoustics, CFD, heat transfer, and chemical analysis. By using the COMSOL® API in model methods or applications, for example, such export and import operations can be automated and set in the context of a for-loop in order to generate statistics over a larger sample set. In this example, we only generate one set of data.

To illustrate how this type of data can be used, let’s create a test model of the simplest possible kind, based on a heat transfer analysis.

Start by creating a new 3D model using a *Heat Transfer in Solids* interface.

Now, import the data from file as an *Interpolation* function. This function will be available under the *Global Definitions* node.

The *Interpolation* function is given the name *cloud* and can later be accessed using expressions like *cloud(x,y,z)*.

To make unit handling easy when using this interpolation function, we set the input argument units to *m* and the function unit to *1*. The *Function unit* corresponds to the unit of *f(x,y,z)=cloud(x,y,z)*, and setting it to *1* makes it dimensionless.

To keep things simple, let's use a *Block* geometry object that matches the imported data exactly, with one corner at the origin and side lengths of 1. This corresponds to the size and position of the *Grid 3D* data set used earlier for generating the data.

For a “real” case, you can instead import or create a CAD geometry, which can be used to truncate the interpolation function in a suitable way. This truncation of data is automatic in COMSOL Multiphysics. The figure below shows such an interpolation of randomized data over a CAD model of a wrench. When evaluating over an arbitrary geometry, it can be useful to scale the coordinate values in the triple-sum. In the wrench example, instead of k*x+l*y+m*z, as in the expressions above, the scaled expression k*(x/0.05)+l*(y/0.05)+m*(z/0.05) is used.

This type of irregular material data may have uses in statistical modeling of materials such as those found in additive manufacturing, where perfect material homogeneity of a 3D-printed component may not always be possible to achieve. The data can be used for any type of material property, such as conductivity, permeability, and elasticity properties, to name a few.

Getting back to our unit cube example, we now add a *Blank Material* node. We will, somewhat arbitrarily, set the *Density* to 2000 kg/m³ and the *Heat capacity* to 1 J/kg/K. Since we are performing a stationary analysis, the *Heat capacity* is irrelevant. The *Thermal conductivity* is set to the expression *1+2[W/m/K]*cloud(x,y,z)*. We can see from the earlier *Slice* plot visualization that the values of the interpolation table are roughly between *-0.2* and *0.2*. This means that this expression will generate an interesting spatial distribution of thermal conductivity values between about 0.6 and 1.4.

The coefficient 2[W/m/K] is used to assign a consistent unit to the expression. The constant 1 will be automatically converted to the correct unit: [W/m/K].
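To see where the quoted range of roughly 0.6 to 1.4 comes from, the mapping can be sketched in plain Java (the helper name is hypothetical; the unit W/(m·K) is implied by the coefficient):

```java
public class ConductivityRange {
    // Thermal conductivity 1 + 2*cloud(x,y,z), in W/(m*K)
    static double kThermal(double cloudValue) {
        return 1.0 + 2.0 * cloudValue;
    }

    public static void main(String[] args) {
        // cloud(x,y,z) values observed in the slice plot span roughly -0.2 to 0.2
        System.out.println(kThermal(-0.2)); // lower end of the conductivity range
        System.out.println(kThermal(0.2));  // upper end of the conductivity range
    }
}
```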

Let’s define some simple boundary conditions. Set the temperature at the top surface to 393.15[K] and the bottom surface to 293.15[K], corresponding to a 100-K temperature difference.

Now, let’s generate a default mesh.

COMSOL Multiphysics will automatically interpolate values such as those from the material properties to this unstructured mesh from the imported interpolation function. Alternatively, we could generate a swept mesh with hexahedral elements of the same size as the original data, 50x50x50. Such a representation would be more “true” to the original data.

You can experiment with different element orders, such as linear and quadratic types. Unless you use a very fine mesh that “oversamples” the data, the results will depend somewhat on the element order.

Running the *Study* will produce a couple of temperature plots, the second of which is an *Isosurface* plot.

Notice how the *Isosurface* plot looks a little bit jagged, which is due to the underlying irregularity of the material data. We can create another *Slice* plot to yet again visualize the data. This time, we do so under the guise of thermal conductivity by using the variable *ht.kmean*, which equals the expression 1+2[W/m/K]*cloud(x,y,z) defined earlier.

Here, the data is sampled at a lower sampling density than the original interpolation function, since we used the default mesh with the *Element size* set to the default *Normal* setting. Successively refining this unstructured mesh will ultimately sample the data at more or less the same level of detail as the original synthesized data.

As mentioned earlier, the approach used here for heat transfer is applicable to virtually any other type of simulation. For example, in a porous media flow simulation, the randomized quantity would be permeability rather than thermal conductivity. In the case of porous media flow, a more advanced random distribution may be needed, but let’s save that discussion for a future blog post.

We can also use the synthesized data in a different way: by using Boolean expressions to convert it to binary data. This method can be used for simulating two or more materials where the material interface is randomized and the material properties change abruptly from one point to another. COMSOL Multiphysics will automatically handle the sharp interpolations needed for this case.

The following picture shows a visualization of the Boolean expression *cloud(x,y,z)>-0.03*, which evaluates to 1 at points where the inequality is true and 0 at the other points.

To get a nicer plot, you can set the resolution of the *Slice* plot to *Extra fine*. This setting is available in the *Quality* section of the Settings window for the *Slice* plot.

We would now like to use this type of binary information in a simulation. It can be interesting, for example, to use it in a heat transfer simulation to see the so-called *percolation* effects. For certain threshold values, you get a large connected component in the material so that the entire slab of material starts conducting much more efficiently.

To try this, change the expression for the thermal conductivity to 1-0.9[W/m/K]*(cloud(x,y,z)>thold), where *thold* is a global parameter. Start by defining *thold* in *Parameters* under *Global Definitions*.

Then, change the material data accordingly.

For each point in space, the *Thermal conductivity* will, in a binary fashion, evaluate to 1 or 0.1, depending on the value of the inequality.
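This binary switch can be sketched in plain Java as well (hypothetical helper names; the unit W/(m·K) is again implied), with the Boolean inequality mapped to 1 or 0 exactly as COMSOL Multiphysics does:

```java
public class BinaryConductivity {
    // 1 - 0.9*(cloud > thold): evaluates to 0.1 where the inequality holds, 1 otherwise
    static double kThermal(double cloudValue, double thold) {
        return 1.0 - 0.9 * (cloudValue > thold ? 1.0 : 0.0);
    }

    public static void main(String[] args) {
        System.out.println(kThermal(0.1, 0.0));  // inequality true  -> low conductivity
        System.out.println(kThermal(-0.1, 0.0)); // inequality false -> high conductivity
    }
}
```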

Now, let’s see how different values of this Boolean threshold will affect the simulation. For this purpose, run a parametric sweep over the parameter *thold* from -0.2 to 0.2.

Add a *Surface Integration* node under *Derived Values* to integrate the total heat flux that goes through one of the surfaces. This is given by the surface integral of -ht.ntflux or +ht.ntflux, depending on whether you are integrating over the top or the bottom surface. In the figure below, we used the top surface.

The resulting *Table* plot shows the amount of heat power transferred (in watts). We can see that for threshold values around 0, the conductivity rises quickly from a low value to a high value. This is due to the sudden appearance of one or more large connected components where the expression 1-0.9[W/m/K]*(cloud(x,y,z)>thold) evaluates to 1.
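The sudden appearance of a spanning cluster is the classic percolation transition, and it can be reproduced outside COMSOL with a short script. The sketch below is purely illustrative: it uses uncorrelated site percolation on a square grid rather than the spatially correlated *cloud(x,y,z)* field, but the mechanism is the same. The fraction of the domain covered by the largest connected conducting cluster jumps sharply once the occupation probability passes a critical value.

```python
import random

def largest_cluster_fraction(n, p, seed=1):
    """Largest 4-connected cluster of 'conducting' cells on an n-by-n grid,
    where each cell conducts independently with probability p,
    returned as a fraction of the total number of cells."""
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]

    # Union-find over flattened cell indices.
    parent = list(range(n * n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge each conducting cell with its right and bottom neighbors.
    for r in range(n):
        for c in range(n):
            if not grid[r][c]:
                continue
            if r + 1 < n and grid[r + 1][c]:
                union(r * n + c, (r + 1) * n + c)
            if c + 1 < n and grid[r][c + 1]:
                union(r * n + c, r * n + (c + 1))

    # Count cluster sizes and return the largest as a fraction.
    sizes = {}
    for r in range(n):
        for c in range(n):
            if grid[r][c]:
                root = find(r * n + c)
                sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values(), default=0) / (n * n)

if __name__ == "__main__":
    for p in (0.3, 0.5, 0.7, 0.9):
        print(p, round(largest_cluster_fraction(60, p), 3))
```

On a 2D square lattice, the jump happens near an occupation probability of about 0.59; the spatial correlation of the synthesized field shifts where the transition occurs, but not its character, which is why the integrated heat flux rises quickly around *thold* = 0.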

The figures below show a *Volume* plot with a *Filter* attribute for three threshold values around 0. The filter shows the parts of the domain for which *cloud(x,y,z) < thold*, which corresponds to the locations of higher conductivity.

We can see from these figures how the highly conductive parts start connecting for the higher threshold values.

The corresponding *Filter* settings are shown in the figure below.

A similar type of percolation effect, seen here for binary data, is also happening for the continuous data case shown earlier. However, when using binary data, the effects are easier to see.

Finally, let’s look at an alternative way of visualizing this type of random data. We will visualize the data set using a large number of randomized points (or rather, small spheres) and let the radius and color of the points vary according to the interpolation function *cloud(x,y,z)*. In addition, we will only allow the points to be visualized for positive values of cloud(x,y,z). This technique will allow us to “see inside the data” in a way that is difficult to achieve using other methods. Note that this visualization technique works for any type of data, including real measured data.

Start by generating three random functions with uniform distribution, with the *Range* set to 1 and the *Mean* value set to 0.5.

To generate this type of plot, we use a *Scatter Volume* plot type. This is available by right-clicking a *3D Plot Group* and selecting *More Plots* > *Scatter Volume*.

In the Settings window of the *Scatter Volume* plot, set the expression for the *X*-, *Y*-, and *Z*-components: rn1(x), rn2(x), and rn3(x), respectively. Here, we are using the *x*-coordinate in an unusual way, in that we are using it merely as a long vector of arbitrary values.

Next, in the *Evaluation Points* section, set the *Number of points* for the *X grid points* to 100,000; 1,000,000; or more, depending on how many points your computer can handle. Set each of the *Y* grid point and *Z* grid point values to 1. This is a trick for getting a long vector of values that we can feed into the random functions in order to generate a lot of random points within the unit cube.

To make the plot appear as in the above figure, go to the *Radius* section and set *Expression* to *cloud(rn1(x),rn2(x),rn3(x))* and the *Radius scale factor* to *0.3*. In addition, in the *Color* section, set the *Expression* to *cloud(rn1(x),rn2(x),rn3(x))* and the *Color table* to *GrayScale*.

One additional noteworthy fact about this plot: negative values will be ignored. This helps our visualization, since roughly half of the generated data is negative and we can more easily see through the data and get an intuitive feel for the variations. This method will only work for a rectangular block. To instead generate this type of plot over an arbitrary CAD geometry, you can use the Particle Tracing Module, which allows you to generate random points inside any type of CAD model.
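The point-filtering trick is easy to prototype outside the plot settings. In the sketch below, *toy_cloud* is a hypothetical stand-in for the *cloud(x,y,z)* interpolation function; the script generates uniform random points in the unit cube and keeps only those where the field is positive, which is exactly the subset of points the *Scatter Volume* plot ends up drawing.

```python
import math
import random

def toy_cloud(x, y, z):
    """Toy stand-in for the cloud(x,y,z) interpolation function:
    a couple of fixed cosine waves, taking both signs over the unit cube."""
    return (math.cos(2 * math.pi * (2 * x + y)) +
            math.cos(2 * math.pi * (x - 3 * z) + 0.7)) / 2.0

def positive_scatter_points(n, seed=2):
    """Generate n uniform random points in the unit cube and keep only
    those where the field value is positive, mimicking how the
    Scatter Volume plot ignores negative radius values."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        p = (rng.random(), rng.random(), rng.random())
        v = toy_cloud(*p)
        if v > 0:
            kept.append((p, v))  # position plus the radius/color value
    return kept

if __name__ == "__main__":
    pts = positive_scatter_points(10000)
    print(len(pts), "of 10000 points kept")
```

Roughly half of the points survive the filter, which is what lets you "see inside the data".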

By the way, a similar-looking plot can be achieved in a 2D model by simply creating a *2D Surface* plot using a double-sum expression, as shown in the figure above.

Determining the best cheese in the world is a hotly contested task, but I’ll go ahead and add my opinion: a good Emmentaler cheese is hard to beat. A master cheesemaker might joke that it’s really the holes that add the flavor, so if we’re going to build a good COMSOL Multiphysics model of a wheel of cheese, we need to include the holes.

*A model of Emmentaler cheese, with randomly positioned and sized holes.*

It turns out that the reasons for the holes in Swiss cheese are quite complicated, so we aren’t going to try to model the hole formation itself. Instead, we will simply set up a model of the cheese, as shown in the image above. We want to include a randomly distributed set of holes within the cheese, with a random hole radius between some upper and lower limit on the radius. We can build this randomized geometry in COMSOL Multiphysics version 5.3 using the new *Model Method* functionality. Let’s find out how…

When you’re running COMSOL Multiphysics® version 5.3 on the Windows® platform and working with the Model Builder, you will now see a *Developer* tab in the ribbon, as shown in the screenshot below. One of the options is *Record Method*. When clicked, this option prompts you to enter a new method *Name* and *Method type*. You can enter any string for the method name, while the method type can either be *Application method* or *Model method*.

An *Application method* can be used within a COMSOL app — a process introduced in this tutorial video. A *Model method* can be used within the underlying COMSOL Multiphysics model and can operate on (and add information to) the existing model data.

*The* Developer *tab, showing the* Record Method *and* Run Model Method *buttons.*

After you click the *OK* button in the *Record Method* dialog box, you can see a red highlight around the entire graphical user interface. All operations performed are recorded in this method until you click the *Stop Recording* button. You can then switch to the Application Builder and view your recorded method. The screenshot below shows the Application Builder and the method after we record the creation of a single geometry object. The object is a cylinder with the tag `cyl1`, a radius of 40 cm, and a height of 20 cm — a good starting approximation for a wheel of cheese.

*The Application Builder showing code for a model method used to create a geometry.*

When we’re working with the Model Builder, we can call this model method within any other model file (as long as it doesn’t already have an existing object with tag `cyl1` within the geometry sequence) via the *Run Model Method* button in the *Developer* tab. Of course, this simple model method just creates a cylinder. If we want to model the holes, we need to introduce a bit of randomness into our method. Let’s look at that next.

Within a model method, you can call standard Java® classes, such as the `Math` class, whose `Math.random()` method returns a double-precision number greater than or equal to 0.0 and less than 1.0. We want to use this method, along with a little bit of extra code, to set up a specified number of randomly positioned and sized holes within the model of the wheel of cheese.

Let’s say that we want 1000 holes randomly distributed throughout the cheese that each have a random radius between 0.1 cm and 1 cm. We also need to keep in mind that Emmentaler cheese has a natural rind within which no holes form. So, we need to add a bit of logic to make sure that our 1000 holes are actually inside the cheese. The complete model method below (with line numbers added for reference) shows how to do this.

```java
 1  int NUMBER_OF_HOLES = 1000;
 2  int ind = 0;
 3  double hx, hy, hz, hr = 0.0;
 4  double CHEESE_HEIGHT = 20.0;
 5  double CHEESE_RADIUS = 40.0;
 6  double RIND_THICKNESS = 0.2;
 7  double HOLE_MIN_RADIUS = 0.1;
 8  double HOLE_MAX_RADIUS = 1.0;
 9  model.component("comp1").geom("geom1").lengthUnit("cm");
10  model.component("comp1").geom("geom1").selection().create("csel1", "CumulativeSelection");
11  while (ind < NUMBER_OF_HOLES) {
12    hx = (2.0*Math.random()-1.0)*CHEESE_RADIUS;
13    hy = (2.0*Math.random()-1.0)*CHEESE_RADIUS;
14    hz = Math.random()*CHEESE_HEIGHT;
15    hr = Math.random()*(HOLE_MAX_RADIUS-HOLE_MIN_RADIUS)+HOLE_MIN_RADIUS;
16    if ((Math.sqrt(hx*hx+hy*hy)+hr) > CHEESE_RADIUS-RIND_THICKNESS) { continue; }
17    if (((hz-hr) < RIND_THICKNESS) || ((hz+hr) > CHEESE_HEIGHT-RIND_THICKNESS)) { continue; }
18    model.component("comp1").geom("geom1").create("sph"+ind, "Sphere");
19    model.component("comp1").geom("geom1").feature("sph"+ind).set("r", hr);
20    model.component("comp1").geom("geom1").feature("sph"+ind).set("pos", new double[]{hx, hy, hz});
21    model.component("comp1").geom("geom1").feature("sph"+ind).set("contributeto", "csel1");
22    ind++;
23  }
24  model.component("comp1").geom("geom1").create("cyl1", "Cylinder");
25  model.component("comp1").geom("geom1").feature("cyl1").set("r", CHEESE_RADIUS);
26  model.component("comp1").geom("geom1").feature("cyl1").set("h", CHEESE_HEIGHT);
27  model.component("comp1").geom("geom1").create("dif1", "Difference");
28  model.component("comp1").geom("geom1").feature("dif1").selection("input").set("cyl1");
29  model.component("comp1").geom("geom1").feature("dif1").selection("input2").named("csel1");
30  model.component("comp1").geom("geom1").run();
```

Let’s go through this model method line by line:

1. Initialize and define the total number of holes that we want to put in the cheese.

2. Initialize and define an index counter to use later.

3. Initialize a set of double-precision numbers that holds the *xyz*-position and radius of each hole.

4–8. Initialize and define a set of numbers that defines the cheese height, radius, rind thickness, and maximum and minimum possible hole radius in centimeters.

9. Set the length unit of the geometry to centimeters.

10. Create a new selection set with tag `csel1` of type `CumulativeSelection`. Note that if such a selection set already exists, the method fails at this point. You could also modify the method to account for this, if you want to run the method repeatedly in the same file.

11. Initialize a while loop to create the specified number of holes.

12–14. Define the *xyz*-position of the holes by calling the random method and scaling the output such that the *xyz*-position of the holes lies within the outer Cartesian bounds of the cheese.

15. Define the hole radius to lie between the specified limits.

16–17. Check if the hole position and size are such that the hole is actually outside of the cheese. If so, continue to the next iteration of the while loop without executing any of the remaining code in the loop. This check can be done in a single line or split into three lines, depending on your preference of programming style.

18. Create a sphere with a name based on the current index value.

19–20. Set the radius and position of the newly created sphere. Although the radius can be passed in directly as a double, the position must be specified as an array of doubles.

21. Specify that this sphere feature is part of (contributes to) the selection set with tag `csel1`.

22–23. Iterate the index, indicating that a sphere has been created, and close the while loop.

24–26. Create a cylinder primitive that represents the wheel of cheese.

27–29. Set up a Boolean difference operation. The object to add is the cylinder primitive, while the object to subtract is the selection of all of the spheres.

30. Run the entire geometry sequence, which cuts all of the spheres out of the cylinder, forming the wheel of cheese.
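Stripped of the COMSOL geometry calls, the core of the method is plain rejection sampling. The following sketch reimplements the logic of lines 11–23 in Python so the accept/reject checks can be tested in isolation; the constants mirror the model method, and nothing here talks to COMSOL.

```python
import random

# Cheese dimensions in centimeters, mirroring the model method.
CHEESE_HEIGHT = 20.0
CHEESE_RADIUS = 40.0
RIND_THICKNESS = 0.2
HOLE_MIN_RADIUS = 0.1
HOLE_MAX_RADIUS = 1.0

def sample_holes(number_of_holes, seed=3):
    """Rejection-sample hole centers and radii until the requested number
    of holes lies entirely inside the cheese, clear of the rind."""
    rng = random.Random(seed)
    holes = []
    while len(holes) < number_of_holes:
        hx = (2.0 * rng.random() - 1.0) * CHEESE_RADIUS
        hy = (2.0 * rng.random() - 1.0) * CHEESE_RADIUS
        hz = rng.random() * CHEESE_HEIGHT
        hr = rng.random() * (HOLE_MAX_RADIUS - HOLE_MIN_RADIUS) + HOLE_MIN_RADIUS
        # Reject holes poking through the side rind...
        if (hx * hx + hy * hy) ** 0.5 + hr > CHEESE_RADIUS - RIND_THICKNESS:
            continue
        # ...or through the top or bottom rind.
        if hz - hr < RIND_THICKNESS or hz + hr > CHEESE_HEIGHT - RIND_THICKNESS:
            continue
        holes.append((hx, hy, hz, hr))
    return holes
```

Because the bounding box of the sampling region is only slightly larger than the cylinder, the rejection rate is modest and the loop terminates quickly even for 1000 holes.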

We can run this method in a new (and empty) model file to create a model of a wheel of cheese. Each time we rerun the method, we will get a different model. The geometry sequence in the model file contains all of the spheres and the cylinder primitives as well as the Boolean operation.

If we want to, we could also add some additional code to our model method to write out a geometry file of just the final geometry: the cheese. This geometry file can be written in the COMSOL Multiphysics native or STL file format. We could also write out to Parasolid® software or ACIS® software file formats with any of the optional modules that include the Parasolid® software kernel. Working with just the final geometry after it has been exported and reimported is faster than working with the complete geometry sequence.

We can see the final results of our efforts below. Delicious!

*A model of a wheel of Emmentaler cheese, ready to be eaten.*

We’ve looked at a simple example of how to use model methods to create a geometry with randomly placed and sized features. There are some questions that we haven’t addressed here, such as how to ensure that the holes are not overlapping and how to come up with a close-packed arrangement, but these turn out to be difficult mathematical questions that are fields in their own right.

Of course, there is a lot more that you can do with model methods, which we will save for another day. There are also other ways to create a random geometry, such as by parametrically defining a surface.

- Read about another use of model methods: generating simulation results that you can see and hear
- Check out the new features in COMSOL Multiphysics® version 5.3 on the Release Highlights page

*ACIS is a registered trademark of Spatial Corporation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Parasolid is a trademark or registered trademark of Siemens Product Lifecycle Management Software Inc. or its subsidiaries in the United States and in other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.*

There are many ways to characterize a rough surface. One way is to use its approximate fractal dimension, which is a value between 2 and 3 for a surface. A surface of fractal dimension 2 is an ordinary, almost everywhere smooth surface; the value 2.5 represents a fairly rugged surface; and values close to 3 represent something that is close to “3D space filling”. Correspondingly, a curve of fractal dimension 1 is smooth almost everywhere, the value 1.5 represents a fairly rugged line, and values close to 2 represent something that is close to “2D space filling”.

*The range of fractal dimension values for curves going from 1 (left) to about 1.2 (center) and to 1.6 (right).*

Using a fractal dimension measure can be a useful approximation, but we need to remember that real surfaces aren’t fractal in nature over more than a few orders of magnitude of scale. Real surfaces have a spatial frequency “cutoff” due to their finite size and due to the fact that when “zooming in”, you will eventually hit some new type of microstructure behavior.

Another way of characterizing surface roughness is with respect to its spatial frequency content. This can be turned into a constructive method of synthesizing surface data by using a sum of trigonometric functions similar to a Fourier series expansion. Each term in such a sum represents a certain frequency of oscillation through space. This is the method that we will use here. Let’s quickly review the concepts of spatial frequencies and elementary wave shapes before moving on to trigonometric series.

In physics, the frequency of oscillations over time occurs in mathematical expressions like

cos(2\pi f t)

where the unit of the frequency *f* is 1/s, also known as hertz or Hz.

Oscillations through space have a corresponding spatial frequency, as in the following expression, where we simply have replaced the time variable *t* by a spatial variable *x* and the time frequency *f* with the spatial frequency *v*.

cos(2\pi \nu x)

where the SI unit of the spatial frequency is 1/*m*.

Spatial frequencies are commonly represented by a wave number *k* = 2*πv*.

A related quantity is the wavelength *λ*, which is related to the frequency and wave number as follows:

k=2\pi \nu=\frac{2\pi}{\lambda}

There may be more than one dimension of space and, accordingly, there may be multiple spatial frequencies. In 2D, using Cartesian coordinates, we have:

cos(2\pi (\nu_x x + \nu_y y))=cos(\bf{k} \cdot \bf{x})

where **k** = 2*π*(*ν_x*, *ν_y*) is the wave vector and **x** = (*x*, *y*) is the position vector.

The wave vector represents the direction of the wave.

A rough surface *f*(*x*,*y*) can be seen as composed of many elementary waves of the form

cos(\bf{k} \cdot \bf{x}+\phi)

where *φ* is a phase angle.

The phase angle also makes it possible to express sine functions due to the relationship cos(*θ* - *π*/2) = sin(*θ*).

For a completely random surface, it should hold that the phase angle *φ* can take any value in, say, the interval 0 to *π* or -*π*/2 to *π*/2. When synthesizing elementary waves for a random surface, we can pick *φ* from a uniform random distribution in such an interval of length *π*, since we then allow for the expression to span all possible values between -1 and +1. Note that there may be end-point or wrap-around effects if we choose an interval with a size bigger than *π*. This is due to the cosine function being its own mirror image in steps of *π*, according to cos(*θ* + *π*) = -cos(*θ*).

In order to get an efficient representation that can be used for simulations, we will only allow for a discrete set of spatial frequencies:

\nu_x = m,\quad \nu_y = n

where *m* and *n* are integers.

Let’s consider a surface that is composed of elementary waves of the following form:

cos(\bf{k}_{mn} \cdot \bf{x}+\phi)= cos(2 \pi (mx+ny)+\phi) , \bf{k}_{mn}=2\pi(m,n)

By letting *m* and *n* take both positive and negative values with equal probabilities, we should be able to get a method of synthesizing a surface with no preferred direction of oscillations.

Note that, in this way, each wave direction is represented twice. For example, the direction (-2,-3) is the same as (2,3); (2,-1) is the same as (-2,1); and so on.

If we allow the spatial frequencies *m* and *n* to take values up to maximum integers *M* and *N*, respectively, then this corresponds to a high-frequency cutoff at:

\nu_{xmax}=M, \nu_{ymax}=N

Since we also allow for negative values, there are negative cutoffs at:

\nu_{xmin}=-M, \nu_{ymin}=-N

Having a spatial frequency cutoff at *ν_xmax* = *M* in the *x* direction means that the shortest wavelength we can represent is *λ_xmin* = 1/*M*, and similarly for the *y* direction, *λ_ymin* = 1/*N*.

Each elementary wave will have an associated amplitude so that each constituent wave component has the following form:

A_{mn}cos(\bf{k}_{mn} \cdot \bf{x}+\phi)

The final surface will be a sum over such wave components:

f(\bf{x})=\sum_{m,n}A_{mn}cos(\bf{k}_{mn} \cdot \bf{x}+\phi)

The simplest choice of amplitude would be to choose the coefficients *A _{mn}* from a uniform or perhaps Gaussian distribution. However, it turns out that this will not generate a particularly natural-looking surface. In nature, different processes, such as wearing and erosion, make it more likely that slow oscillations have a larger amplitude than fast ones. In the discrete case, this corresponds to the amplitudes tapering off according to some distribution:

A_{mn} =a(m,n) \sim h(m,n)=\frac{1}{\Vert (m,n)\Vert^{\beta}}=\frac{1}{(m^2+n^2)^{\frac{\beta}{2}}}

where the spectral exponent *β* indicates how quickly higher frequencies are attenuated. Following *The Science of Fractal Images* (Ref.1), the spectral exponent can be related to the fractal dimension of a surface, but only for an infinite series of waves covering arbitrarily high frequencies and only for certain ranges of the exponent. In practice, the amplitudes *a*(*m*,*n*) of our synthesized surface will be generated using a limited number of frequencies, multiplied with a random function *g*(*m*,*n*) having a Gaussian distribution:

*a*(*m*,*n*) = *g*(*m*,*n*)*h*(*m*,*n*)

A Gaussian, or normal, distribution is chosen to get a smooth but random variation in amplitudes with no limit on the magnitude.

The phase angles *φ* will be sampled from a function *u* with a uniform random distribution between -*π*/2 and *π*/2:

*φ*(*m*,*n*) = *u*(*m*,*n*)

To represent our rough surface, we want to use the following double sum:

f(x,y)=\sum_{m=-M}^{M} \sum_{n=-N}^{N} a(m,n) cos(2 \pi(mx+ny)+\phi(m,n))

where *x* and *y* are spatial coordinates; *m* and *n* are spatial frequencies; *a*(*m*,*n*) are amplitudes; and *φ*(*m*,*n*) are phase angles. This expression is similar to a truncated Fourier series. Although the series is expressed in terms of cosine functions, the phase angles make it so this sum can express a quite general trigonometric series due to the angle sum rule:

cos(\alpha+\beta)=cos(\alpha)cos(\beta)-sin(\alpha)sin(\beta)

Due to its definition, the function *f*(*x*,*y*) will be periodic. In order to get a natural-looking surface, we should “cut out” a suitably small portion by letting *x* and *y* vary between some limited values; otherwise, the periodicity of the synthesized data will be apparent. What should these values be?

The overall periodicity will be determined by the slowest oscillations, which correspond to the spatial frequencies *m* = 1 or *n* = 1 in the *x* direction and *y* direction, respectively. This gives a period length of 1 in each direction.

We could generate the surface over a rectangle [*a*, *a* + 1] × [*b*, *b* + 1] or smaller in order to “avoid” the periodicity.
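The full synthesis recipe (tapered Gaussian amplitudes, uniform random phases in an interval of length *π*, and a double sum of cosines with the DC term removed) can be condensed into a short stand-alone sketch. This is illustrative Python mirroring the mathematics above, not any COMSOL API:

```python
import math
import random

def make_rough_surface(N=8, beta=1.5, seed=4):
    """Synthesize f(x, y) as a double sum of cosine waves with Gaussian
    random amplitudes tapered by 1/(m^2+n^2)^(beta/2) and uniform random
    phases in [-pi/2, pi/2]."""
    rng = random.Random(seed)
    waves = []
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue  # skip the unwanted "DC" term
            h = (m * m + n * n) ** (-beta / 2.0)   # amplitude taper
            a = rng.gauss(0.0, 1.0) * h            # a(m,n) = g(m,n)h(m,n)
            phi = rng.uniform(-math.pi / 2, math.pi / 2)
            waves.append((m, n, a, phi))

    def f(x, y):
        return sum(a * math.cos(2 * math.pi * (m * x + n * y) + phi)
                   for m, n, a, phi in waves)
    return f
```

Because all spatial frequencies are integers, the resulting function is exactly periodic with period 1 in both *x* and *y*, which is the periodicity discussed above; sampling it on a sub-rectangle hides that periodicity.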

For the COMSOL Multiphysics implementation, start by defining a couple of parameters for the spatial frequency resolution and spectral exponent according to the following figure:

The amplitude generation will require a random function with a Gaussian distribution in two variables. This functionality is available under the *Global Definitions* node:

Here, the *Label* and *Function name* have been changed to *Gaussian Random* and *g1*, respectively. In addition, the *Number of arguments* is set to *2* instead of the default *1* and the *Distribution type* is set to *Normal*, which corresponds to a normal or Gaussian distribution.

In a similar way, for the phase angle, we need a uniform random function in the interval between -*π*/2 and *π*/2:

The *Label* is changed to *Uniform Random*, the *Function name* to *u1*, the *Number of arguments* to *2*, and the *Range* to *pi*.

You can optionally use random seeds to get the same surface each time you use the same input parameters.

The next step is to add a *Parametric Surface* node under *Geometry* using a fairly lengthy *z*-coordinate expression, as follows:

*0.01*sum(sum(if((m!=0)||(n!=0),((m^2+n^2)^(-b/2))*g1(m,n)*cos(2*pi*(m*s1+n*s2)+u1(m,n)),0),m,-N,N),n,-N,N)*

where *x* = *s1* and *y* = *s2* vary between 0 and 1.

The factor 0.01 is used to scale the data in the *z* direction. Alternatively, this scaling factor can be absorbed into the amplitude coefficients.

*A parametric surface geometry feature is used to generate a synthesized random surface.*

Note that whenever you update any of the parameters or expressions for the Parametric Surface, you need to click the *Rebuild with Updated Functions* button in the *Advanced Settings* section of the Settings window.

This expression is a double-sum over the integer parameters *m* and *n* each running from -*N* to *N*. If we compare this to the mathematical discussion earlier, we can see that we have set *M* = *N*, resulting in a square surface patch. The term where *m* and *n* are simultaneously zero corresponds to an unwanted “DC” term and is eliminated from the sum by the *if* statement.

The syntax for the *sum*() operator is as follows:

*sum(expr,index,lower,upper)*

which evaluates a sum of a general expression *expr* for all indices *index* from *lower* to *upper*.

The syntax for the *if*() operator is as follows:

*if(cond,expr1,expr2)*

for which the conditional expression *cond* is evaluated to *expr1* or *expr2* depending on the value of the condition.

In this example, the resolution of the parametric surface has been increased by setting the *Maximum number of knots* to 100 (the default is 20). In addition, the *Relative tolerance* is relaxed to 1e-3 (the default is 1e-6). The underlying representation of the parametric surface is based on nonuniform rational B-splines (NURBS). More knots correspond to a finer resolution of the NURBS representation. The tolerance is increased, since we are not overly concerned about the approximation accuracy of the generated surface for this example.

By generating a mesh, we can get a useful visualization of the surface, as seen in the figure below.

*A meshed random surface.*

Note that *N* = 20 means that the fastest oscillations have a wavelength of 1/20 = 0.05 m, assuming SI units. The periodicity in the *x* and *y* directions can be seen by following the curves parallel to the *y*- and *x*-axes at *x* = 0, *x* = 1 and *y* = 0, *y* = 1, respectively.

To see the periodicity even more clearly, we can plot the surface on the square [0,2] × [0,2]:

*The periodicity of the surface on the square [0,2] × [0,2]. The surface height is represented by color.*

*Surfaces generated on the square [0,1] × [0,1] by superimposing 20 frequency components with amplitude spectral exponents β = 0.5, β = 1.0, β = 1.5, and β = 1.8, clockwise from the top-left image. The surface height is represented by color.*

This type of randomly generated surface can, in COMSOL Multiphysics, be used in any kind of physics simulation context, including for electromagnetics, structural mechanics, acoustics, fluid, heat, or chemical analysis. The expression for the double sum is not limited for use in geometry modeling, but can also be used for material data, equation coefficients, boundary conditions, and more. Using model methods or application methods, a large number of surface realizations can be used in a loop to gather statistics of the results.

By generalizing the double-sum to a triple-sum, you can synthesize 3D inhomogeneous material data. However, you have to be prepared for long and memory-intensive computations when performing triple-sums for 3D simulations.

*A fracture flow simulation based on synthetically generated fracture aperture data. The Rock Fracture Flow tutorial model is part of the COMSOL Multiphysics Application Library.*

*A generic thermal expansion analysis of two 1-centimeter-sized metal blocks with a material interface based on the parametric surface described in this blog post. The bottom material slab is aluminum and the top material slab is steel. The visualization shows the von Mises stress at the material interface and on the surface of the aluminum slab.*

The sum

f(x,y)=\sum_{m=-M}^{M} \sum_{n=-N}^{N} a(m,n) cos(2 \pi(mx+ny)+\phi(m,n))

is similar to a discrete cosine transform or to the real part of a discrete Fourier transform:

f_c(x,y)=\sum_{m=-M}^{M} \sum_{n=-N}^{N} F_c(m,n)e^{i(2 \pi(mx+ny))}

where the subscript *c* is used to indicate complex quantities and *x* and *y* now take discrete values. Here, the phase angle information is encoded in the complex Fourier coefficients.

Due to the definition of the discrete Fourier transform, we are allowed to perform a shift in index in order to generate the following more familiar form:

f_c(x,y)=\sum_{m=0}^{2M} \sum_{n=0}^{2N} F_c(m,n)e^{i(2 \pi(mx+ny))}

or by using discrete values:

f_c(k,l)=\sum_{m=0}^{2M} \sum_{n=0}^{2N} F_c(m,n)e^{i(2 \pi(m \frac{k}{2M+1}+n \frac{l}{2N+1}))}

More commonly, the discrete Fourier transform is indexed like this:

f_c(k,l)=\sum_{m=0}^{\mathfrak{M}-1} \sum_{n=0}^{\mathfrak{N}-1} F_c(m,n)e^{i(2 \pi(m \frac{k}{\mathfrak{M}}+n \frac{l}{\mathfrak{N}}))}

where

\mathfrak{M}=2M+1, \mathfrak{N}=2N+1.

Note that in order to generate real-valued data, the Fourier coefficients need to fulfill conjugate symmetry relationships in order to eliminate the imaginary-valued contributions from sine functions. Using a sum of cosine functions (i.e., a cosine transform) avoids this problem.
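The conjugate-symmetry requirement is easy to verify numerically. In the 1D sketch below (plain Python, no FFT library; a deliberately naive O(M²) inverse transform), Fourier coefficients are generated so that F[M−m] = conj(F[m]); the inverse transform then comes out real to machine precision.

```python
import cmath
import random

def inverse_dft(F):
    """Plain O(M^2) inverse discrete Fourier transform (no normalization)."""
    M = len(F)
    return [sum(F[m] * cmath.exp(2j * cmath.pi * m * k / M) for m in range(M))
            for k in range(M)]

def conjugate_symmetric_coefficients(M, seed=5):
    """Random Fourier coefficients satisfying F[M-m] = conj(F[m]),
    which forces the inverse transform to be real-valued."""
    rng = random.Random(seed)
    F = [0j] * M
    F[0] = complex(rng.gauss(0, 1), 0.0)  # the DC term must be real
    # Fill symmetric pairs; for even M the Nyquist coefficient is left at 0.
    for m in range(1, (M + 1) // 2):
        c = complex(rng.gauss(0, 1), rng.gauss(0, 1))
        F[m] = c
        F[M - m] = c.conjugate()
    return F
```

Pairing the terms for *m* and *M* − *m* shows why this works: each pair sums to a complex number plus its conjugate, so the imaginary parts cancel exactly.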

A fast way of generating a large number of Fourier coefficients is to use a fast cosine transform (FCT) or fast Fourier transform (FFT). This could be done in another program and then imported to the COMSOL Desktop® user interface as an interpolation table. The trigonometric interpolation method described above is slower, but has the advantage that it can be used directly on an unstructured mesh and is automatically refined by simply refining the mesh in the user interface.

For a description of using FFT for synthesizing surfaces, see Ref.1.

Let’s conclude with a few interesting, special cases of random surface generation in COMSOL Multiphysics, including curves and cylinders.

In a 2D simulation, a random curve can be generated using the following expression:

0.01*sum(if((m!=0),((m^2)^(-b/2))*g1(m)*cos(2*pi*m*s+u1(m)),0),m,-N,N)

where g1 and u1 are 1D random functions.

Note that to get the “same level of randomness” when generating a curve, the spectral exponent needs a lower value than for a surface.

*A randomized curve with spectral exponent 0.8.*

A randomized curve in polar coordinates representing random deviations from a circle can be generated:

x=cos(2*pi*s)*(1+0.1*sum(if((m!=0),((m^2)^(-b/2))*g1(m)*cos(2*pi*m*s+u1(m)),0),m,-N,N))

y=sin(2*pi*s)*(1+0.1*sum(if((m!=0),((m^2)^(-b/2))*g1(m)*cos(2*pi*m*s+u1(m)),0),m,-N,N))

This corresponds to a parametric curve in 2D polar coordinates:

x=r(\phi) cos(\phi)

y=r(\phi) sin(\phi)

*A randomized polar curve with spectral exponent 0.8.*
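The polar construction can be sketched in a few lines of illustrative Python, with the COMSOL functions *g1* and *u1* replaced by a seeded Gaussian generator and a uniform one. Because the radius function is built from integer frequencies, it is periodic with period 1 in the parameter *s*, so the curve closes on itself.

```python
import math
import random

def make_radius_function(N=6, beta=0.8, seed=6):
    """1D trigonometric series r(s) = 1 + 0.1 * sum_m a_m cos(2*pi*m*s + phi_m),
    periodic in s with period 1, used as a random radius around a circle."""
    rng = random.Random(seed)
    terms = []
    for m in range(-N, N + 1):
        if m == 0:
            continue  # skip the DC term, as in the COMSOL expression
        a = rng.gauss(0.0, 1.0) * (m * m) ** (-beta / 2.0)
        phi = rng.uniform(-math.pi / 2, math.pi / 2)
        terms.append((m, a, phi))

    def r(s):
        return 1.0 + 0.1 * sum(a * math.cos(2 * math.pi * m * s + phi)
                               for m, a, phi in terms)
    return r

def polar_point(r, s):
    """Point on the randomized closed curve at parameter s in [0, 1]."""
    return (math.cos(2 * math.pi * s) * r(s),
            math.sin(2 * math.pi * s) * r(s))
```

The closure of the curve (the point at *s* = 0 coinciding with the point at *s* = 1) is exactly the property that makes the polar form suitable for representing random deviations from a circle.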

A randomized cylinder in 3D can be generated using a parametric surface with parameters as follows:

x=cos(2*pi*s1)*(1+0.1*sum(sum(if((m!=0)||(n!=0),((m^2+n^2)^(-b/2))*g1(m,n)*cos(2*pi*(m*s1+n*s2)+u1(m,n)),0),m,-N,N),n,-N,N))

y=sin(2*pi*s1)*(1+0.1*sum(sum(if((m!=0)||(n!=0),((m^2+n^2)^(-b/2))*g1(m,n)*cos(2*pi*(m*s1+n*s2)+u1(m,n)),0),m,-N,N),n,-N,N))

z=s2*2*pi

where the parameters *s1* and *s2* vary between 0 and 1.

This corresponds to a parametric surface in cylindrical coordinates:

x=r(\phi,z) cos(\phi)

y=r(\phi,z) sin(\phi)

z=z

Such a single-piece random cylinder represents a type of self-intersecting surface that is not allowed in COMSOL Multiphysics. You can easily get around this by, for example, creating four surface patches corresponding to the parameter *s1* varying from 0 to 0.25, 0.25 to 0.5, 0.5 to 0.75, and 0.75 to 1.0. One such patch corresponds to a polar angle span of size *π*/2.

*A randomized tubular surface using polar coordinates.*

Ref. 1: H.-O. Peitgen and D. Saupe, eds., *The Science of Fractal Images*, Springer-Verlag, 1988.

We often want to stop a time-dependent or parametric solver when a certain condition is met or violated. But we usually do not know the exact time or parameter value at which the stop condition is going to occur. In this case, we need to specify a solution time or parameter range large enough so that we are reasonably certain that the stop condition is going to be activated. We then add a stop condition to terminate the solver.

Physical conditions or computational issues can trigger a stop. Examples of such physical conditions include allowable stresses, temperature limits, and species depletion in reactions. As for computational issues, one example is a very small time step size taken by the solver.

In COMSOL Multiphysics, a stop condition can be added to the following solvers:

- Time-dependent solver
- Frequency-domain solver
- Stationary solver with a parametric or auxiliary sweep

The first step is to define a scalar that we can use in a relational or logical expression. This scalar can either be a quantity defined at some point of interest or a global quantity, such as an integral, maximum value, minimum value, or average of a variable over domains or boundaries. In the software, we can define this using a point probe or a component coupling. The second step is adding a stop condition to the solver configuration. The condition has to be a statement that evaluates to true or false.
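The control flow the solver implements can be sketched generically: step the solution forward, evaluate the monitored scalar, and terminate as soon as the condition evaluates to true. The sketch below uses a hypothetical heating law chosen only so that the stop condition fires; it is a toy stand-in, not the COMSOL solver or the wafer model.

```python
def integrate_with_stop(rhs, y0, dt, t_end, stop_condition):
    """Explicit Euler time stepping that terminates early when
    stop_condition(t, y) evaluates to True, mimicking a solver
    Stop Condition node."""
    t, y = 0.0, y0
    while t < t_end:
        if stop_condition(t, y):
            return t, y, True      # stopped by the condition
        y = y + dt * rhs(t, y)
        t = t + dt
    return t, y, False             # reached t_end without triggering

if __name__ == "__main__":
    # Toy heating law: constant 10 K/s heating minus weak linear cooling.
    heating = lambda t, T: 10.0 - 0.02 * (T - 20.0)
    t_stop, T_stop, stopped = integrate_with_stop(
        heating, 20.0, 0.01, 60.0, lambda t, T: T >= 250.0)
    print(f"stopped={stopped} at t={t_stop:.2f} s, T={T_stop:.1f}")
```

The monitored scalar here is the state itself; in a real model it would be the probe or component coupling value, but the evaluate-then-terminate pattern is the same.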

To demonstrate how to set up a stop condition, let’s look at a time-dependent heat transfer problem. We will tell the software to stop the computation when the maximum temperature exceeds a threshold value. (We can also use different conditions, but the procedure remains the same.)

Consider a silicon wafer heated by a moving heat pulse and cooled via radiation to the surroundings. (For more details, check out the tutorial model.) In this case, we want to modify the model such that the computation stops when a threshold temperature is reached.

*The temperature distribution on a rotating wafer heated by a moving heat pulse.*

To start, we define a scalar by using integrals, averages, maximums, or minimums evaluated over geometric entities. If we are interested in what happens at a specific point, a point probe or an integration component coupling can be used. In this example, the scalar that we monitor is the maximum temperature. This can be obtained using the *Maximum* component coupling.

*Steps for adding the Maximum component coupling.*

After computing the study for the first time, or after using the *Show Default Solver* command, we can go to the *Time-Dependent Solver* node under *Solver Configurations* and add a *Stop Condition* node.

*A* Stop Condition *node can be added to the* Time-Dependent Solver *node in the solver configurations.*

The final step is to add the expression and conditions in the *Stop Condition* Settings window.

*The Stop Condition Settings window for a time-dependent solver.*

The stop condition above stops the solver when the maximum temperature is greater than or equal to 250°C.

The solver stops after 27.238 seconds because the stop condition has been satisfied (even though we asked it to compute up to 60 seconds in the study settings). We can see the cutoff time in the *Warnings* node that the solver adds to the solver sequence.

*By default, the solver adds a* Warnings *node if the computation is terminated by a stop condition.*

In the Stop Condition Settings window, we use the *comp1* name scope to identify both the maximum operator and the temperature variable *T*. These are items defined under *Component 1*, whereas the solver sequence is a global item. For example, if we have a second component in our model and redefine *maxop1* there, the solver sequence cannot tell which operator we are referring to. Thus, we must use the component identifier.

Note that we can choose whether the solver stops when the condition becomes true or when it becomes false. Additionally, in the *Output at Stop* section, we can decide whether to store a time step just before or just after the stop condition is met.

If we have an *Event* interface in the Model Builder, we can use it as a stop condition by adding it under *Stop Events* in the *Stop Condition* Settings window.

The time-dependent solvers in COMSOL Multiphysics are adaptive. As such, the time steps are picked based on user-specified error tolerances and computed local error estimates. When the error estimates are very high, the software takes smaller and smaller steps. This happens, for example, when the solution becomes singular. If we want to stop the solver when the time steps become too small, rather than letting it approach the singularity with ever-decreasing time steps, we can add a stop condition based on the reserved variable *timestep*.
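The behavior is easy to see in a toy adaptive loop (an illustrative sketch, not the COMSOL algorithm): the step is halved whenever the local error estimate exceeds the tolerance, so near a singularity the step size collapses, and a stop condition on the step size ends the run cleanly. The `error_estimate` callback and all thresholds below are hypothetical.

```python
# A hedged sketch of why a timestep-based stop condition helps: an adaptive
# solver halves its step whenever the local error estimate is too large, so
# near a singularity dt collapses. Stopping when dt < dt_min avoids grinding
# toward the singular time. (Illustrative only; not the COMSOL algorithm.)

def adaptive_march(error_estimate, t_end, dt=0.1, tol=1e-3, dt_min=1e-6):
    t = 0.0
    while t < t_end:
        if error_estimate(t, dt) > tol:
            dt *= 0.5                  # refine: error too large
            if dt < dt_min:            # stop condition on the step size
                return t, dt, "stopped: timestep below dt_min"
            continue
        t += dt
        dt *= 1.2                      # tentatively grow the step again
    return t, dt, "reached t_end"

# Toy error estimate that blows up near a "singularity" at t = 1:
est = lambda t, dt: dt / abs(1.0 - t)
t, dt, status = adaptive_march(est, t_end=2.0)
print(status)
```

Without the step-size check, the loop above would keep halving `dt` indefinitely as `t` creeps toward the singular time without ever reaching it.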

*Since* timestep *is a predefined global variable, it is recognized without a name scope.*

Stop conditions can be added to frequency-domain studies as well as parameterized stationary studies. Parameterized stationary studies can either be regular parametric sweeps or auxiliary sweeps. In all of these cases, the *Stop Condition* node should be added to the *Parametric* node under *Solver Configurations* > *Stationary Solver*.

*Stop conditions can be added in stationary analyses when performing an auxiliary or parametric sweep.*

Note that, as in the time-dependent problem, we need a scalar variable to monitor and must use the right variable scope in the stop condition.

To see how to use a stop condition in an auxiliary sweep to implement a nonstandard load ramping procedure, check out the example model of a postbuckling analysis of a shell structure.

Expressions used in the stop condition and other items in the solver configuration have to be global to be automatically recognized. Otherwise, we have to provide the component name as a prefix. This is the case for every variable (including physics variables like *T* for temperature) and functions inside a component. On the other hand, parameters defined under *Global Definitions* in the Model Builder are recognized without the component prefix.

For example, when referring to the built-in integration operator, we can simply use *integrate*. In contrast, when referring to the integration operator *myint* that is defined in component *comp1*, we have to use *comp1.myint*.

The following predefined constants, variables, functions, and operators can be used in the solver configuration without identifying a component:

- Physical constants, such as the gravitational acceleration *g_const*
- Mathematical constants, such as pi
- Built-in global variables, such as *t* for time and *freq* for frequency
- Built-in mathematical functions, such as trigonometric and exponential functions
- Built-in operators, such as differentiation and integration operators

See the *COMSOL Reference Manual* for a full list and syntax.

In this blog post, we have discussed how to add conditions that stop a time-dependent or parametric solver when one or more criteria are met. If you have any questions related to this topic or using COMSOL Multiphysics, contact us via the button below.


We sometimes get warnings and errors while meshing models. When this happens, we should inspect the entities listed in the *Warning* and *Error* nodes. Most warnings are caused by using mesh settings that are too coarse, preventing thin regions and short edges from being resolved properly.

To find these geometric entities, we can use the *Zoom to Selection* button in the *Warning* node. Toggling off the *Mesh Rendering* button and toggling on the *Wireframe* button for 3D meshes lets us easily see the entities reported inside the geometry. We can gain further insight into the issue by using the *Measure* button from the *Geometry* or *Mesh* toolbars for selected entities, for example, to get the length of edges or distance between points.

With our measurements and the information from the *Warning* node, we can then set up *Virtual Operations* or *CAD Defeaturing* to eliminate the small geometric entities, or reduce the mesh size if the features are important for the simulation.

*A mesh of an airplane (left), where some interior boundaries are indicated as being too narrow to be properly resolved by the current mesh settings. The same boundaries, highlighted in blue, after clicking the* Mesh Rendering *and* Wireframe *buttons (right).*

An *Error* node referring to a coordinate will have a button that enables us to zoom in on the coordinate. A small red sphere will appear around the coordinate so that the particular region can be studied in detail. A warning indicating that one or more low-quality elements have been generated requires special attention. We can check the *Minimum element quality* in the *Statistics* window and plot the mesh elements of the worst quality (explained in further detail later in this post).

A negative mesh quality, or a value very close to zero, indicates that the reported mesh elements are inverted or nearly inverted. Note that the inverted linear mesh elements we discuss here are not the same phenomenon as the inverted higher-order elements that you may run into when solving. Inverted linear mesh elements must be avoided to achieve convergence and accurate results.

One way to quickly get an overview of the created mesh is to have a look at the statistics in the *Mesh Statistics* window, which we open by right-clicking the *Mesh* head node.

*The Mesh Statistics window, showing a wide variety of statistics for different selections and quality measures.*

It is possible to change the selection of domains, boundaries, or edges for which we display the numbers. For this, we use the *Geometric Entity Level* drop-down menu at the top of the window. The *Quality Measure* menu lets us choose from a list of quality measures, including:

- Skewness
- Maximum angle
- Volume versus circumradius
- Volume versus length
- Condition number
- Growth rate

The *Skewness* measure is a suitable measure for most types of meshes; hence, it is the default measure. This quality measure is based on the equiangular skew that penalizes elements with large or small angles as compared to the angles in an ideal element. This quality measure is also used when reporting bad element quality during mesh generation. With the *Maximum angle* measure, only elements with large angles are penalized, making this option particularly well suited for meshes where anisotropic elements are desired, such as boundary layer meshes.
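The idea behind an equiangular-skewness measure can be sketched for a single triangle (this uses one common textbook definition, normalized so the ideal element scores 1; the exact formula COMSOL evaluates may differ):

```python
# A sketch of an equiangular-skewness-style quality measure for a triangle
# (one common textbook definition; the exact formula COMSOL uses may differ).
# An equilateral triangle (all angles 60 degrees) scores 1; a degenerate
# sliver scores near 0.
import math

def angles(a, b, c):
    """Interior angles (radians) of the triangle with vertices a, b, c."""
    def ang(p, q, r):  # angle at vertex p
        v1 = (q[0]-p[0], q[1]-p[1]); v2 = (r[0]-p[0], r[1]-p[1])
        dot = v1[0]*v2[0] + v1[1]*v2[1]
        return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))
    return ang(a, b, c), ang(b, a, c), ang(c, a, b)

def skewness_quality(a, b, c, ideal=math.pi/3):
    th = angles(a, b, c)
    skew = max((max(th) - ideal) / (math.pi - ideal),
               (ideal - min(th)) / ideal)
    return 1.0 - skew    # 1 = ideal element, 0 = degenerate

print(skewness_quality((0, 0), (1, 0), (0.5, math.sqrt(3)/2)))  # equilateral: ~1.0
print(skewness_quality((0, 0), (1, 0), (0.5, 0.01)))            # sliver: close to 0
```

Note how both overly large and overly small angles pull the score down, which is exactly why this measure works well as a general-purpose default.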

*Volume versus circumradius* is based on a quotient of the element volume and the radius of the circumscribed sphere (or circle) of the element. This quality measure is sensitive to large angles, small angles, and anisotropy. For triangular meshes in 2D and tetrahedral meshes in 3D where isotropic elements are desired, *Volume versus circumradius* is a suitable measure. On the other hand, *Volume versus length* is based on a quotient of element edge lengths and element volume. This quality measure is primarily sensitive to anisotropy.
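For the 2D analogue, a volume-versus-circumradius-style quality can be sketched as the triangle area compared with its circumscribed circle, normalized here against an equilateral triangle so the ideal element scores 1 (an illustration of the idea, not necessarily COMSOL's exact normalization):

```python
# A sketch of a volume-versus-circumradius-style quality for a 2D triangle:
# the element area compared with the circumscribed circle, normalized (here
# against an equilateral triangle) so the ideal element scores 1. This is an
# illustration of the idea, not necessarily COMSOL's exact normalization.
import math

def circumradius_quality(a, b, c):
    # Edge lengths and area (shoelace formula)
    la = math.dist(b, c); lb = math.dist(a, c); lc = math.dist(a, b)
    area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))
    R = la * lb * lc / (4.0 * area)          # circumradius R = abc / (4A)
    return (area / R**2) / (3.0 * math.sqrt(3.0) / 4.0)  # equilateral -> 1

print(circumradius_quality((0, 0), (1, 0), (0.5, math.sqrt(3)/2)))  # ~1.0
print(circumradius_quality((0, 0), (1, 0), (0.5, 0.05)))            # small
```

A stretched or sliver-shaped element has a large circumscribed circle relative to its area, so the score drops, which is why this measure penalizes anisotropy as well as bad angles.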

The *Condition number* quality measure is based on properties of the matrix transforming the actual element to an ideal element. Lastly, *Growth rate* is based on a comparison of the local element size to the sizes of neighboring elements in all directions.
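A growth-rate-style comparison between neighboring elements can be sketched as a size ratio (illustrative only; not necessarily the exact formula COMSOL evaluates):

```python
# A hedged sketch of a growth-rate-style measure between neighboring mesh
# elements: compare local element sizes, giving 1 where neighbors are equal
# in size and smaller values where the size changes abruptly. (Illustrative;
# not necessarily the exact formula COMSOL evaluates.)

def growth_quality(h_elem, h_neighbors):
    """h_elem: size of the element; h_neighbors: sizes of its neighbors."""
    ratios = [min(h_elem, h) / max(h_elem, h) for h in h_neighbors]
    return min(ratios) if ratios else 1.0

print(growth_quality(1.0, [1.0, 1.0]))   # uniform mesh -> 1.0
print(growth_quality(1.0, [1.0, 4.0]))   # abrupt size jump -> 0.25
```

Unlike the shape-based measures above, this one says nothing about an element in isolation; it only flags abrupt size transitions between neighbors.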

For all quality measures, a quality of 1 is the best possible and it indicates an optimal element in the chosen quality measure. At the other end of the interval, 0 represents a degenerated element. Although the meshing algorithms in COMSOL Multiphysics try to avoid low-quality elements, it is not always possible to do so for all geometries. High geometric aspect ratios, small edges and faces, thin regions, and highly curved surfaces may all lead to poor-quality meshes. When the geometry does lead to a poor-quality mesh, the mesher returns the poor-quality mesh for examination, rather than no mesh at all.

Depending on the quality measure used, the *Minimum element quality*, *Average element quality*, and the *Element Quality Histogram* sections will change accordingly. To get accurate results, it is important to know which *Minimum element quality* and *Average element quality* are sufficient for your particular application.

There are no absolute numbers for what the quality should be, as different physics and solvers place different requirements on the mesh. In general, elements with a quality below 0.1 are considered poor quality for many applications. The mesher automatically presents a warning for elements with a quality below 0.01, as these are considered very low quality and should be avoided in most cases. A couple of low-quality elements may be acceptable if they are located in a less important part of the model, while in other cases, a single low-quality element may lead to convergence problems.
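The bookkeeping behind these statistics is straightforward and can be sketched in plain Python (a hypothetical helper, not part of COMSOL): given per-element quality values, report the minimum and average, flag elements below a chosen threshold, and bin the values for a histogram.

```python
# A small sketch of the kind of bookkeeping behind the statistics and
# histogram: given per-element qualities in [0, 1], report the minimum and
# average, flag elements below a threshold, and bin the values.

def mesh_quality_report(qualities, poor=0.1, bins=5):
    flagged = [i for i, q in enumerate(qualities) if q < poor]
    hist = [0] * bins
    for q in qualities:
        hist[min(int(q * bins), bins - 1)] += 1   # bin [0, 1] into `bins` buckets
    return {
        "min": min(qualities),
        "avg": sum(qualities) / len(qualities),
        "poor_elements": flagged,
        "histogram": hist,
    }

report = mesh_quality_report([0.95, 0.8, 0.62, 0.05, 0.99])
print(report["min"])            # 0.05
print(report["poor_elements"])  # [3]
```

A respectable average can easily hide a handful of near-degenerate elements, which is why the minimum and the flagged-element list matter as much as the average.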

The histogram in the Mesh Statistics window will give us a visual of the quality of the mesh, which can be a quick way to see if we need to change the overall mesh sizes in some way.

To understand where low-quality elements are located and which mesh size parameters to change, it can be a good idea to plot the mesh. We do this either by clicking the *Plot* button in the *Mesh* ribbon or by right-clicking the *Mesh* head node of the mesh we would like to plot and selecting *Plot*. This gives us a *Mesh* data set, available under *Results* > *Data Sets*, under which we can add *Selections* to narrow down the number of entities shown in the plot. The *Mesh* plot feature can also be combined with other plot features.

We can gain a general understanding of how a specific mesh is built up from the different types of mesh elements. To do so, we set *Level* to *Volume*, choose an *Element Type* from the list, and set a uniform *Element Color* for this element type. We then duplicate the *Mesh* plot feature node, select another *Element Type* and *Element Color*, and repeat the process until we have colored all of the available element types in the mesh. In the image below, the elements are shrunk by setting the *Element Scale Factor* to 0.8.

*A colorful representation of the different element types in a mesh. The tetrahedra are shown in cyan, the pyramids in magenta, and the prisms in gray. To make it easier to see how the elements are connected, they are shrunk by a factor of 0.8.*

As we already mentioned, it can be important to understand where the elements of poor quality are located. This will help us understand if the geometry needs to be changed in any way or if the mesh-size parameters need to be modified to better handle the problematic area.

We can start by setting *Level* to *Volume* and in the *Element Filter* section selecting the *Enable Filter* check box. Then, we enter a Boolean expression, which reflects the elements we want to check. In the image below, the elements with a *Skewness* that is below 0.05 are shown. We can use the *Replace Expression* feature to easily access the names of the different quality measures. These measures can be used to spot different weaknesses in the generated mesh, so we should make sure we check all of them to see which is best used for our particular meshes.

*The volume elements with a* Skewness *below 0.05 are displayed for the Shell-and-Tube Heat Exchanger model. In front of the Graphics window, the Replace Expression window gives easy access to the different quality measures.*

Among these quality measures, *Growth rate* is a bit different, as it describes the relation between two mesh elements, whereas the other quality measures describe the shape quality of each individual mesh element. The growth rate evaluates to a maximum of 1 in regions of the mesh where the elements are constant in size, and to lower values in regions where the element size changes from one element to the next. Since the most interesting behavior often occurs inside the mesh of a domain, it can be useful to add a filter expression that includes the space dimensions. Here is one such example:

*The growth rate displayed for the mesh of the Biconical Antenna model. The plot shows that the boundary layer mesh in the PML domains is of similar element size, while the growth rate varies more in the tetrahedral mesh in the middle domains. In this example, the mesh elements where* y *> 0.1 mm are shown by using the* Element filter *option. The slice plot shows the Electric field norm (dB).*

We have discussed three different ways of inspecting a mesh, which can be used to spot regions with low-quality mesh elements. Now that we know how to find out where the low-quality mesh elements are, we can either manually adjust the mesh in these regions or address the issues with the underlying CAD geometry itself. To learn about modifying CAD geometries for meshing purposes, see the following blog posts:

- Working with Imported CAD Designs
- Using Virtual Operations to Simplify Your Geometry
- Improving Your Meshing with Partitioning

If you want to evaluate the meshing capabilities of COMSOL Multiphysics for your own modeling needs, please contact us.
