One possible format when working with scanned data is text files with coordinate data from images of slices produced by an MRI or CT scan. In this example, let’s look at a case where we have a number of files of cross-sectional coordinates from different planes of a human head. Each coordinate file represents a curve of the outer surface of the head in that particular plane.
In short, the procedure includes:
Now, let’s look at each step in more detail.
To be able to import a text file in the Interpolation Curve feature, the coordinates need to be organized in the Sectionwise format. This is a native format of COMSOL Multiphysics in which the text file is organized to format one section with coordinates, one with the element connectivity, and one with data columns. The first two sections are needed here, while the data columns can be omitted when using this format for geometry creation. Below is an example of a file in the Sectionwise format:
%Coordinates
One to three columns containing x, y (optional), and z (optional)
%Elements
Triangulation where each row contains the row indices of the points in the Coordinates section that make up one element (triangular in 2D, tetrahedral in 3D)
%Data (funname)
Column of data values for each point
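As a concrete illustration, a small Python script can generate such a file for one circular cross section. This is a hypothetical generator, not an official COMSOL exporter; it assumes 1-based point indices and line-segment elements for curve data, which is worth verifying against the COMSOL documentation for your version:

```python
import numpy as np

# Write a single closed cross section (a circle of hypothetical radius 0.1)
# in the Sectionwise layout: a %Coordinates section, then an %Elements
# section connecting consecutive points into line segments.
n = 16
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y = 0.1 * np.cos(theta), 0.1 * np.sin(theta)

lines = ["%Coordinates"]
lines += [f"{xi:.6f} {yi:.6f}" for xi, yi in zip(x, y)]
lines.append("%Elements")
# Assumed 1-based indices; each row is one line element, closing the loop
lines += [f"{i + 1} {(i + 1) % n + 1}" for i in range(n)]

with open("slice.txt", "w") as f:
    f.write("\n".join(lines) + "\n")
```

The %Data section is omitted here, since it is not needed for geometry creation.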
In this example, we have 17 text files containing coordinate data from the 3D object. One Interpolation Curve feature per text file is added, which gives 17 curve objects in total. The Closed curve setting is used to ensure that the created curve objects are indeed closed and that the first- and second-order derivatives are continuous everywhere. In COMSOL Multiphysics, lofting a closed curve produces a solid object, while lofting an open curve produces a surface. The Relative tolerance is increased to 0.001 or 0.01 to produce a smoother representation of the curve. With the default tolerance (which is 0), the curves have a more jagged shape. In this example, the top of the head is represented by a point.
The Settings window for the Interpolation Curve feature (left) and all of the curves representing the outer shell of a head (right). The Relative tolerance is increased to 0.001 or 0.01 to produce a smoother representation. The Closed curve setting ensures that the curve becomes closed and has continuous first- and second-order derivatives everywhere.
Now that we have the curve objects that define the outline of each cross section of the head, we can create the solid shape with the Loft operation. The Loft operation is one of the geometry modeling tools included in the Design Module, which you can read about in this introductory blog post. Before setting up the Loft operation, we need to make sure that the curve objects are suitable as profiles for lofting. Lofting curves or surfaces to a solid requires the different profiles to have the same number of edges and points. The exception is the first and last objects (called the start and end profiles), which can be points, as is the case for the top of the head in this example.
A closed interpolation curve has two vertices, but it is not possible to choose their positions. So the criterion mentioned above, that the intermediate objects have the same number of edges and points, is already fulfilled, as all of the created curves have two edges. However, where these points are placed on the profile objects is also important. When the loft works its way through the curves, it connects all of the points with edges in the direction of the loft. If the points are not positioned along a fairly straight line, the resulting surfaces might become distorted. Therefore, we often need to partition the edges further to accomplish a good representation of all of the surfaces. To do this, we can use two different partitioning procedures.
How to partition the edges and which features to use for this purpose is not an exact science, but something for which you use some trial and error and decide based on what looks best after a visual inspection. Here, we use the Partition Objects and Partition Edges features. The advantage of using the Partition Objects operation is that this option allows multiple curve objects to be partitioned at locations defined by the points of intersection with a selected plane. As some of the interpolation curves already contain points that are fairly well aligned on the front and back of this human head example, a Work plane is added at y = 0 to create more points along the same imaginary lines.
Partitioning some of the curves using a Work plane at y = 0. The settings for the Partition Objects feature (left). The curve objects highlighted in blue (right) are partitioned at two places using the work plane, which is pictured as a gray sheet with a rectangular grid.
The Partition Edges feature partitions selected edges based on either specified relative arc lengths or by projecting one or several vertices. As we want the vertices to line up fairly well when lofting curves, projecting vertices is a good option. However, for some edges, it is better to specify a relative arc length to have more control over where the vertex is created.
The Settings windows for the Partition Edges features, showing both the Vertex projection (left) and the Arc length (middle) specification types, and the edges selected for a vertex projection (right).
To verify that the geometry objects have the same number of edges and points, we click the Select Objects button above the Graphics window, select a curve object in the Graphics window, and then click the Measure button in either the Geometry or Mesh tab. The output of this measurement is written to the Messages log.
Now that the points are roughly aligned, it is time to create the solid. The Loft feature contains many options, but we only use the most straightforward procedure here: adding all of the curve objects and the point on the top of the head in the Profile objects list. The start and end profiles are determined automatically by the Loft operation. As shown in the left image below, there are many collapsed sections (highlighted in blue) that can be used to fine-tune the loft; for example, to specify the direction of the loft. The collapsed sections are not used in this example.
The Settings window of the Loft operation (left), showing the input Profile objects, which is the only input used in this example. The right image shows the resulting solid head.
A surface or solid object lofted from closed continuous profile curves has at least two seams that go through the vertices of the profile curves, creating two face partitions. More seams may be introduced by the operation, depending on the alignment of the vertices on the different curves. If the profile curves have discontinuous tangents, additional seams are introduced and go through these points. No additional seams are introduced when using the default setting Face partitioning: Minimal in the Loft operation (see the image above), as is the case in this example.
If we want the lofted surface to be more partitioned — for example, to assign boundary conditions — the partitioning options Column and Grid can be used. The first option divides the surface along each vertex in the profile curves, while the latter also adds partitions along the profile curves themselves. Yet another possibility is to use the different Partitions operations available in the Geometry ribbon. On the other hand, if we want a cleaner appearance, we can use Virtual Operations to create composite faces. Ignore Edges is one of the features that can be used for this purpose, but Form Composite Faces also gives the same end result.
By adding the edges shown in the previous image to an Ignore Edges operation (left), the final geometry gets a smooth and nice look (right).
This blog post has discussed the possibility of creating curves from coordinate data and then lofting these curves into a solid object. Upcoming posts in this blog series will discuss other possible formats as well as approaches for handling irregular shapes in COMSOL Multiphysics.
Download the file used to create the example featured here:
NASTRAN is a registered trademark of NASA.
Consider the following story. A large industrial reactor showed unexpected behavior in its bubbly flow compartment. Due to very costly downtimes — and difficulty in getting good measurements — the engineering department made a CFD model to analyze the problem. The engineer creating this model had to include a bidirectional coupling, as gas bubbles affected the water pressure and turbulence and water influenced the creation and transport of the gas. As such, the model became very nonlinear.
The engineer started by using a Bubbly Flow interface on the 3D CAD model of the reactor section. All of the relevant boundary conditions, material properties, and mesh were defined, but the engineer ran into convergence issues. The model had several inlets, outlets, narrow regions, gas creation sites, and more. Since the time-dependent study took about seven hours to compute, finding the cause of the trouble was an arduous task. Meanwhile, the deadline kept creeping closer and closer.
What would you do in this situation?
Sloshing inside a vehicle’s fuel tank. This example model involves computing the velocity and pressure for both the gas and liquid simultaneously. Such CFD computations in 3D are extremely valuable, but also have long computation times.
Before continuing the story, let’s discuss (in a general way) what could have happened, because there are already lessons to be learned. There are many situations where a 3D model with complicated physics is required. A common approach is to take all of the expected “ingredients”, put them together, and reach your goal. But what if something unexpected happens? What if the model doesn’t converge? Now you’re in a tricky situation, where any of the following factors could be the cause:
A full 3D model might take anywhere from minutes to several hours to run. Thus, every mistake in a complex model takes longer to discover. Moreover, you can’t be sure what causes the issue. A more reliable approach is to use the following steps:
This workflow is extremely efficient, because if you run into trouble, you can precisely pinpoint the cause and fix it quickly.
The File menu in COMSOL Multiphysics, showing where you can access the Application Library.
The Application Library, showing part of the results when typing “turbulent” in the search box. Every example model and app in the Application Library includes documentation and step-by-step instructions.
Let’s go back to the industrial reactor model. The COMSOL Support team recommended making a simplified 2D model. Since this model required only five minutes to run, the engineer quickly identified the culprit: one outlet pipe was cut off in the CAD model. The short outlet caused a region of strong vorticity to conflict with the boundary condition, which prescribed a constant pressure. A short outlet is normally not a problem, but here, the combination of the outlet vortex, 3D turbulent bubbly flow, and a less-than-optimal mesh made it a showstopper. The model ran smoothly after simply extending the outlet with an Extrude geometry operation.
A simplified 2D model that demonstrates the physical phenomenon is shown below.
Left: The original situation. A streamline plot of the velocity shows that the outlet cuts off a vortex in the flow. Right: The velocity profile with the extended outlet.
A close-up view of the velocity profiles in the outlet region. Left: The original, cutoff outlet. Right: The extended outlet. The outlet extension is hidden in this image to make a good comparison.
Computers are always getting faster, so when you have easy-to-use multiphysics software, it’s tempting to include all of the details in your model at once. After all, more detail means more accuracy. However, accuracy comes at the price of an increased computation time. Therefore, each modeling project should be preceded by an assessment of the required accuracy versus the time budget. This important step is often forgotten or underestimated. Think about the following scenarios and how they differ in demands:
If you start out with an unrealistic goal given the time budget — or worse, no well-defined goal — then you risk running into trouble.
In many cases, simulation projects do require high accuracy. When modeling real-world situations, multiphysics phenomena often have to be taken into account, typically in 3D CAD geometries. COMSOL Multiphysics covers the full spectrum of computational demands, from straightforward to complex. This gives you the freedom to maneuver anywhere between these extremes. Once you have determined your goal, you can outline a strategy on how to get there.
Left: A static load analysis of a bracket. Right: A coupled acoustic-mechanical analysis of a gearbox. Both analyses can be performed in the same interface and workflow when using COMSOL Multiphysics.
By using the steps outlined in this blog post, you will have full confidence in reaching your modeling goals before a deadline. Notice that this approach is both simple and straightforward. However, when a complication occurs, it’s easy to get stuck in the details and lose sight of the bigger picture. This can happen, even with experienced simulation engineers. One of the most important modeling skills is the ability to isolate a problem and reduce it to the very essentials when needed.
Want to evaluate the COMSOL® software for your unique needs? Contact us via the button above.
Browse more blog posts about solving models in COMSOL Multiphysics:
The boundary element method (BEM) is complementary to the finite element method (FEM) and is generally available in the COMSOL Multiphysics® software as of version 5.3. There are three different types of interfaces that are based on BEM, summarized in the table below:
| Interface | Applicable Physics | Products with Interface | Models Wires? |
| --- | --- | --- | --- |
| Electrostatics, Boundary Elements | Electrostatics in 2D and 3D | AC/DC Module | Yes |
| Current Distribution, Boundary Elements | Currents in electrochemical applications in 2D and 3D | Electrodeposition Module, Corrosion Module | Yes |
| PDE, Boundary Elements | Laplace’s equation in 2D and 3D | COMSOL Multiphysics (no add-on product required) | No |
These interfaces are quite similar. Although this blog post focuses on the interface for electrostatics, some of the techniques shown here are applicable if you are interested in the other two interfaces.
In contrast to FEM, BEM doesn’t require the generation of a robust volumetric mesh throughout your computational domain, which can be difficult and resource-intensive to achieve. BEM eliminates the problem by only requiring a surface mesh, which is significantly easier to generate. However, this advantage comes at a price. The COMSOL Multiphysics implementation of BEM cannot be used to model, for example, nonlinear or general inhomogeneous materials. The table below summarizes the pros and cons of BEM and FEM in the COMSOL Multiphysics implementation.
| Modeling Task | Using BEM | Using FEM |
| --- | --- | --- |
| Infinite domains | Easy | Requires infinite elements or an approximation of an infinite domain through a large enclosing truncation domain |
| Postprocessing at arbitrary distances | Easy | Requires recomputing with a larger truncation domain |
| Wires | Easy; can be modeled with curves | Requires meshing the diameter of the wires to avoid mesh-dependent solutions |
| Volume mesh | Not required | Required |
| Isotropic materials | Easy | Easy |
| Anisotropic materials | Not available | Easy |
| Nonlinear materials | Not available | Easy |
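The table rows on volume meshes and matrices can be made tangible with a back-of-the-envelope estimate in Python. The numbers below (unknowns and nonzeros per row) are illustrative assumptions, not measurements:

```python
# For the same number of unknowns, FEM yields a sparse matrix with a
# handful of nonzeros per row, while BEM yields a fully dense matrix.
n = 100_000               # unknowns (assumed)
nnz_per_row = 30          # typical-ish FEM sparsity (assumed)

fem_bytes = n * nnz_per_row * 8   # double-precision values only
bem_bytes = n * n * 8             # full dense matrix

print(fem_bytes / 1e9)    # 0.024 GB
print(bem_bytes / 1e9)    # 80.0 GB
```

The dense storage grows quadratically with the number of unknowns, which is why practical BEM implementations avoid forming the full matrix, as discussed later in the post. Keep in mind, though, that BEM typically needs far fewer unknowns for the same problem, since only the surfaces are discretized.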
By combining domains modeled with FEM and regions modeled with BEM, you can get the best of both worlds. For example, you can have one domain with an anisotropic material modeled with the traditional Electrostatics interface in the AC/DC Module and a surrounding isotropic domain modeled with the new Electrostatics, Boundary Elements interface.
To illustrate using the Electrostatics, Boundary Elements interface, let’s create a simplified model of an electrostatic precipitation filter. This type of filter is used in various industrial settings to filter particles from, for example, exhaust gases from coal power plants. An array of high-voltage wires creates a corona discharge region surrounding them, which in turn charges the unwanted particles. The charged particles then migrate in the electric field toward grounded metal plates (the collecting electrodes) and are periodically scraped off when the layer of particles becomes so thick that it deteriorates the performance of the filter.
Simulating the entire physical process of corona discharge, ionization, and charged particle migration is complicated and beyond the scope of this blog post. Instead, let’s look at the filter from a purely electrostatics perspective. This keeps the model simple, yet quite general, and illustrates a modeling approach that is applicable to a wide range of other electrical devices. If you want to know more about the details of modeling an electrostatic precipitation filter, see page 21 of COMSOL News 2012.
The filter in this example consists of 6 ground plates and 60 wires, as shown in the figure below. The wires are modeled as parametric curves and held at 50 kV.
The electrostatic precipitation filter example.
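Each wire in the model is a parametric curve. As a rough stand-in (the dimensions below are invented, not taken from the model), a spiral wire path could be generated like this:

```python
import numpy as np

# Hypothetical spiral wire path, in the spirit of the model's
# parameterized curves; s is the curve parameter running from 0 to 1.
s = np.linspace(0.0, 1.0, 400)
radius, pitch, turns = 0.005, 0.02, 10.0   # invented dimensions (m, m, -)

x = radius * np.cos(2.0 * np.pi * turns * s)
y = radius * np.sin(2.0 * np.pi * turns * s)
z = pitch * turns * s                       # total height: 0.2 m
```

In COMSOL Multiphysics, expressions of this form would go into a Parametric Curve geometry feature; the curve itself carries no radial extent, as discussed below.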
In the real case, this filter would be held in a frame, which we have neglected here to keep things simple. We assume that the space between and outside the plates is filled with air, which is the only material in this model. In this example, we study this component as “hanging in midair” to get its idealized electrostatics properties. To assign air to the model, notice that there is no domain to click. Instead, you select the air region surrounding the model by selecting All voids from the selection list in the Settings window for the Air material. The only available void in this example is called the Infinite void and represents the region between the plates all the way “out to infinity”, as shown below.
For more information on the difference between solid domains and voids, see the Release Highlights page.
The settings for the Air material.
Selecting the Infinite void in this way is all that has to be done to model an infinite region when using boundary elements. Had we modeled this with FEM, then we would have needed to enclose the geometry model in a finite-sized box (or some other shape). To increase the accuracy of the computation, we would have also needed to add extra layers with Infinite Element domains surrounding the box.
The boundary conditions are set at two levels: for boundary surfaces and for edges. The figure below shows the Ground boundary condition assigned to the ground plates.
The settings for the Ground boundary condition.
There is a more interesting condition on the wires. They are assigned a Terminal edge condition with the Terminal type set to Voltage at 50 kV. In addition, the Edge radius is set to 1 mm, as shown in the figure below.
The settings for the Terminal edge condition.
Notice how the radii of the wires are entered into the model on the physics side and not on the CAD side. The CAD model of the wires consists of parameterized curves that have no radial extent but are (mathematically speaking) one-dimensional objects. This modeling approach shows a major benefit of BEM. If the model had been set up with the finite-element-based interface for electrostatics, then the wires would have had to be modeled as thin spiral-shaped tubes with a finite-sized radius, thereby generating a mesh with many elements. Although that is possible, BEM is much more convenient.
FEM produces large sparse matrices, whereas BEM produces large, densely filled matrices. This calls for solvers specialized in handling such matrices. In fact, the system matrix produced by BEM is so demanding to store that it is never formed in its entirety. Instead, only the parts of the matrix that the solver currently needs are generated; more specifically, only the currently required matrix-vector multiplications are performed. The method implemented in COMSOL Multiphysics for fast matrix-vector multiplications is called the adaptive cross approximation method and is used automatically when you are using one of the BEM interfaces. If you are interested, you can read more about the related solver options for version 5.3.
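The matrix-free idea, in which the solver only ever applies the matrix to vectors, can be sketched with an iterative solver in Python. The kernel below is a made-up smooth kernel, not COMSOL's electrostatic BEM kernel, and the adaptive cross approximation itself is not implemented here:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# The dense system matrix K[i, j] = exp(-|x_i - x_j|) is never stored;
# only its action on a vector is handed to the iterative solver.
n = 50
pts = np.linspace(0.0, 1.0, n)

def matvec(v):
    # Apply the dense kernel matrix row by row without forming it
    return np.array([np.sum(np.exp(-np.abs(pts - xi)) * v) for xi in pts])

A = LinearOperator((n, n), matvec=matvec)
b = np.ones(n)
x, info = gmres(A, b, atol=1e-10, restart=n)  # info == 0 means converged
```

Fast methods such as the adaptive cross approximation make each of these matrix-vector products cheap by compressing the kernel on the fly; the solver loop itself stays the same.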
On one hand, BEM requires fewer degrees of freedom in order to produce accurate results as compared with FEM. On the other hand, BEM is more computationally demanding, so in the end, the methods are comparable with regard to computational demand versus accuracy.
For finite-element-based models, the computed fields in the modeled volume are visualized using slice plots, isosurface plots, arrow plots, flux lines, etc., by means of the volumetric finite element mesh. When using BEM, there is no volumetric mesh available, so in order to visualize spatially varying fields, a regular grid is used as a substitute. The regular grid is defined as a Grid 3D data set and lets you define a rectangular box with maximum and minimum values of its extents in the x, y, and z directions. In addition, the x-, y-, and z-resolution settings correspond to the element size and determine the granularity of the visualization. In the figure below, the resolution is set to 100 by 200 by 200, which corresponds to 4 million hexahedral grid elements.
The settings for the Grid 3D data set.
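The idea of sampling a field on a regular grid, independent of any volume mesh, can be sketched in Python. Here, a point-charge potential stands in for the computed BEM field; the grid matches the 100 by 200 by 200 resolution mentioned above:

```python
import numpy as np

eps0 = 8.854e-12
q = 1e-9                                  # a 1 nC point charge (stand-in field)

# Regular grid, analogous to a Grid 3D data set
x = np.linspace(-0.5, 0.5, 100)
y = np.linspace(-1.0, 1.0, 200)
z = np.linspace(-1.0, 1.0, 200)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

r = np.sqrt(X * X + Y * Y + Z * Z)
r = np.maximum(r, 1e-3)                   # avoid the singularity at the origin
V = q / (4.0 * np.pi * eps0 * r)          # evaluate the field at every grid point

print(V.shape)                            # (100, 200, 200): 4 million points
```

The grid resolution only affects how finely the field is sampled for visualization; it does not change the underlying solution.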
Boundary element fields can be quite heavy to postprocess and visualize, so it can be a good idea to turn off Automatic update of plots. The corresponding check box is available in the Settings window for the Results node, as shown below.
The check box for Automatic update of plots in the Result node settings.
The visualization below shows the electric potential field around the wires, between the plates, and surrounding the plates. By increasing the size of the Grid 3D box, you can extend the visualization to a larger volume without having to recompute the solution. This is another benefit of BEM, since with FEM, you would need to enlarge the truncation domain and recompute.
The electric potential field for the electrostatic precipitation filter example.
A limitation with BEM is that each modeling domain is required to have a constant and isotropic material property. In the case of electrostatics, each domain must have a constant permittivity. You can create models with several domains of different permittivity values. The figure below shows a MEMS capacitor model with two permittivity values:
To make this type of model possible, each distinct dielectric domain needs to have its own Charge Conservation node added under the Electrostatics, Boundary Elements interface. Within each Charge Conservation domain, or group of domains, the permittivity is a constant. The capacitance of this type of device is computed using the predefined variable for capacitance under Derived Values, just like the corresponding finite-element-based model would.
A MEMS capacitor model example with multiple permittivity values.
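As a hedged sanity check for this kind of model (with invented dimensions, not the actual MEMS geometry), the capacitance of a parallel-plate capacitor with two dielectric layers follows from combining the per-layer capacitances in series:

```python
eps0 = 8.854e-12
A = 1e-6              # plate area, m^2 (hypothetical)
d1, d2 = 1e-6, 2e-6   # layer thicknesses, m (hypothetical)
er1, er2 = 4.5, 1.0   # relative permittivities (hypothetical)

# Each dielectric layer acts as a parallel-plate capacitor...
C1 = eps0 * er1 * A / d1
C2 = eps0 * er2 * A / d2
# ...and the layers combine in series between the electrodes
C_series = 1.0 / (1.0 / C1 + 1.0 / C2)
print(C_series)
```

A capacitance computed from the model's Derived Values should approach such an analytical estimate when the geometry is close to the ideal parallel-plate case, which makes this a useful first check of a new model.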
COMSOL Multiphysics version 5.3 comes with a predefined multiphysics coupling that combines finite-element-based and boundary-element-based electrostatics. The figures below show another version of the MEMS capacitor model, where the dielectric material is replaced by an anisotropic piezoelectric material (PZT-5H). Since the Electrostatics, Boundary Elements interface doesn’t allow for simulating anisotropic materials, the traditional finite-element-based Electrostatics interface is used in that region.
In addition, a small box surrounding the capacitor is also modeled using the finite-element-based interface. In this example, the Electrostatics, Boundary Elements interface is only active in the exterior infinite void. The coupling between the finite element region and boundary element region is defined under the Multiphysics node in the settings for Boundary Electric Potential Coupling.
The settings to combine finite-element-based and boundary-element-based electrostatics.
The figure below shows the electric potential visualized in both the finite element and boundary element regions. Some numerical artifacts due to interpolation can be seen in the junction between the finite element and boundary element domains. Such artifacts vanish when computing using a finer mesh and visualizing using a higher resolution for the Grid 3D data set.
The electric potential field for the MEMS capacitor model example.
You can download the example models highlighted in this blog post by clicking the button below.
Two other boundary-element-based electrostatics tutorials are available in the Application Gallery:
The capacitive position sensor demonstrates the use of the Electrostatics, Boundary Elements interface in combination with a Deformed Geometry interface for modeling large geometrical displacements. In addition, the same model demonstrates using the accelerated capacitance-matrix-calculation option Stationary Source Sweep, which is new in version 5.3 of COMSOL Multiphysics. The Stationary Source Sweep study type can also be used with the finite-element-based Electrostatics interface.
In COMSOL Multiphysics, you can plot two different quantities with different scales and units simultaneously on a 1D plot by adding a second y-axis to your plot group. In this video, we will introduce a case where this functionality is needed, add a second y-axis, and include a legend and annotations to complete the picture.
For this video, we have opened the Axisymmetric Transient Heat Transfer model from the Application Libraries. You can open this as well by going to COMSOL Multiphysics>Heat Transfer and then selecting heat transient axi. This is a relatively simple heat transfer model in which a temperature of 0°C is assigned to the entire domain and a boundary condition then holds the outer boundaries at 1000°C. As time passes, the heat transfers throughout the model and heats up the inner domain. Looking at how this model is set up, we’ve set an initial value of 0°C throughout this entire 2D domain. Next, this axial symmetry condition defines where the symmetry plane is, at r = 0. And this is used when we revolve the model; that’s how we get this puck. Finally, we add a temperature of 1000°C to the 3 boundaries on the outside of this rectangle. And then we run the study from 0 to 190 seconds, with a step size of 10 seconds. We have also created a Cut Point 2D. This is simply a point that we define — in this case, at 0.1 in the r direction and 0.3 in the z direction — and we can extract values from this point. We have used this cut point to extract the temperature value at 190 seconds.
We can also create a graph from this cut point, showing all of the times from 0 to 190 seconds. Let’s go to the Results tab and add a 1D Plot Group. We can then add a Point Graph, select Cut Point 2D, extract the temperature as degrees Celsius, and click Plot. Here, you can see that 186°C is the last value, at 190 seconds, and we can see how this model heats up from 0 to 190 seconds. Let’s name this Temperature. We can then add another Point Graph for a second quantity to show. In this case, we’ll select Cut Point 2D again and select the heat flux. We will use the total heat flux in the r-component direction, and we’ll add a negative sign to the expression. This defines the inward heat flux; that is, how the heat transfers from the right to the left, or from the outer boundaries inward to the center of the domain. So we can name this Inward heat flux. And again, we can name the point graph Inward heat flux and plot this. You can see the heat flux, in green, going from 0 to over 18,000 watts per meter squared. You may also notice that the Temperature plot is still plotted on this graph, as the blue line down here at the bottom. Since the heat flux values are so high, they drown out the lower temperature values.
This is where we would want to create a second y-axis. Let’s go to our 1D plot group and select Two y-axes. We then need to select the plot that we want to show on the secondary y-axis, and we’ll choose Inward heat flux. After we check this box, the graph automatically updates: we can see the Temperature scale on the left side and the Inward heat flux scale on the right side. We could also go into the Temperature node again and, say, change the unit from degrees Celsius to kelvin. When we click Plot again, the graph automatically updates and we can see kelvin on the left side.
Now, we’ve created this graph with two y-axes, but it’s a little difficult to tell which line goes with which axis. We can resolve this by adding legends. At the bottom of this Settings window, you can see that legends are already added. Therefore, we can go into the individual nodes and select to show the legend. We’ll need to rename this Temperature. And then in Inward heat flux, we can show the legend and rename this Heat flux. Here, you can see that the legend has been added with arrows pointing to the axis that the Temperature and Heat flux each correspond to and the color for the lines they each correspond to as well. Let’s just move this to a different position in the middle left so that we’re not blocking any of the information that we want to show.
And finally, this is a pretty good picture, but it might not be clear, again, which line corresponds to which axis and value, so we can add annotations here. We can enter in Temperature for the first annotation; enter a position, which I already know we want to be 80 in the r direction and 350 in the z direction; and we can plot this. You can see it here. This point is a little distracting and doesn’t need to be there, so we’ll get rid of it and then change the color to Blue to match the line. We can do the same thing for the Heat flux: enter in our position, uncheck Show point, select Green, and plot this value. The green is a little off, so we’ll switch to a custom color and then select a more appropriate green from that list.
And that is how you create a plot with two y-axes. First, you need to add both of your graphs. Then, simply check the y-axis button and select which graph you want to show on the secondary y-axis. Finally, you can add a legend and annotations to help people visualize your graph better.
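For readers who want to reproduce the same two-y-axes idea outside COMSOL, it can be sketched with matplotlib; the data below are made-up curves standing in for the temperature and heat flux results:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # render off-screen
import matplotlib.pyplot as plt

# Made-up curves standing in for the model's results
t = np.linspace(0.0, 190.0, 20)
temperature = 186.0 * (1.0 - np.exp(-t / 60.0))   # degC
heat_flux = 18000.0 * np.exp(-t / 80.0)           # W/m^2

fig, ax1 = plt.subplots()
ax1.plot(t, temperature, "b-", label="Temperature")
ax1.set_xlabel("Time (s)")
ax1.set_ylabel("Temperature (degC)", color="b")

ax2 = ax1.twinx()                          # the second y-axis
ax2.plot(t, heat_flux, "g-", label="Heat flux")
ax2.set_ylabel("Inward heat flux (W/m^2)", color="g")

fig.savefig("two_y_axes.png")
```

As in the video, each quantity gets its own scale, so the temperature curve is no longer drowned out by the much larger heat flux values.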
This blog post discusses Integrative Engineering: A Computational Approach to Biomedical Problems, including its content and the intended readers; the purpose and motivation for writing this book; and most importantly, how getting an in-depth understanding of the “intricate machinery” of computational modeling will help make you a better modeler.
Integrative Engineering: A Computational Approach to Biomedical Problems is based on my years of firsthand experience in teaching computational modeling of multidisciplinary problems, with the motivation to encourage transdisciplinary learning, integrative thinking, and holistic problem-solving. Hence, the textbook reflects the ideas I have developed over the years in my search for a practical tactic for updating engineering curricula and making them relevant to our time.
I developed this book in the spirit of encouraging integrative learning, questioning, hypothesis testing, problem-solving, inventing, designing, and prototyping. I also want to encourage learning and researching by linking commonalities across compartmentalized disciplines based on the underlying mathematics, in order to generate novel solutions and cultivate a sense of limitless possibility in engineering research and industrial R&D activities.
In a way, this motivation has something to do with COMSOL Multiphysics®, software I started using quite some time ago. I still remember my early reluctance toward using this so-called equation-based modeling software. At the time, all of my previous finite element modeling (FEM) experience had given me the impression that seeing equations is something only a program developer needs to do; a program user would not need to be concerned with them. COMSOL Multiphysics® was the FEM program that forced me to see the underlying governing equations.
The more equations I see, the more I find commonality in the governing equations for different physical phenomena. This has led me to believe that an integrative engineering approach based on multiphysics modeling is possible. For example, in one chapter of the book, I detail the procedure for the development of differential equations for different engineering problems — including mechanical, heat, mass transport, and wave propagation — to demonstrate that differential equations can be of the same mathematical type, even though they govern problems of different physics. This highlights the fact that for countless real-world problems, we may only need to deal with limited types of governing differential equations.
Because of my adaptation of this equation-based concept, the book is not like any other book on FEM and biomedical applications, although it discusses procedures used in the finite element method. It introduces a computational modeling approach based on facilitating integrative learning through consolidating commonalities in various disciplines and gaining a deep understanding of how this “intricate machinery” of computational modeling operates.
This book is written for undergraduate and graduate students in engineering and applied sciences, as well as practicing engineers in industry and R&D labs. As a valuable resource for finding and formulating solution ideas from complementary fields of engineering, this book will be useful not only for novice modelers, but also for experienced simulation engineers.
Examples of computational models for biomedical applications. An image-based modeling example for a denture with and without reinforcing wires (top) and a CAD-based modeling example for a spine fixation device (bottom).
Integrative Engineering: A Computational Approach to Biomedical Problems is structured in four parts:
The textbook aims to lay down some groundwork toward what will be a long journey of restructuring the engineering curriculum with the assistance of a computational-modeling-based investigative tool. In future editions of this book, more integrated problems will be presented and discussed as case studies.
What do I mean by restructuring the engineering curriculum? In most universities, an engineering curriculum is a four-year program. When we talk about encouraging students to embark on transdisciplinary learning, we often encounter the argument that we are producing “jacks of all trades and masters of none”. In this book, readers will learn how an integrative engineering approach assisted by interactive computational modeling can help not only reduce many of the redundancies in the existing curricula, but also provide meaningful visualization of scientific principles at work in real-world applications. Furthermore, readers will learn the difference between learning how and learning that, a distinction that especially helped solidify my vision for a new engineering curriculum. I believe it will make a big difference in readers’ views toward learning, too.
The idea of going beyond learning that and learning how is key to incorporating relevant humanities content in engineering curricula. Doing so will help train engineers who are not only technically competent, but also conscious of social needs, so that they can innovate not just for technological pursuits, but also for humanity. With an integrative approach, we will be able to do R&D in a much more effective and scientifically guided way, rather than the traditional trial-and-error way.
The book encourages learning how by emphasizing the notion that modeling engineering problems is solving partial differential equations (PDEs) through computational means. It introduces a systematic look at the “black box” of how engineering knowledge is expressed mathematically and examines the ways in which differential equations are solved by computer-based approximate methods:
The book discusses domain discretization using various elements in detail. You will learn about shape function development using Lagrange interpolation formulas for a full spectrum of Lagrange elements (1D bar elements, 2D rectangular and triangular elements, and 3D hexahedral and tetrahedral elements), a treatment you will not find in other books, as well as shape function development using Hermite interpolation formulas for beam elements. The origin and meaning of the two interpolation formulas are also covered in the book.
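To get a flavor of the Lagrange interpolation formula the book builds on, here is a minimal plain-Java sketch (independent of COMSOL; the class name and node layout are illustrative choices) of the shape functions for a 3-node quadratic 1D bar element on the reference interval [-1, 1]:

```java
// Lagrange shape functions for a 3-node (quadratic) 1D bar element
// on the reference interval [-1, 1], with nodes at -1, 0, and +1.
public class LagrangeBar {
    static final double[] NODES = {-1.0, 0.0, 1.0};

    // Lagrange formula: N_i(x) = product over j != i of (x - x_j) / (x_i - x_j)
    public static double shape(int i, double x) {
        double n = 1.0;
        for (int j = 0; j < NODES.length; j++) {
            if (j != i) {
                n *= (x - NODES[j]) / (NODES[i] - NODES[j]);
            }
        }
        return n;
    }

    public static void main(String[] args) {
        // Kronecker-delta property: N_i equals 1 at its own node, 0 at the others
        System.out.println(shape(0, -1.0)); // 1.0
        System.out.println(shape(1, -1.0)); // 0.0
        // Partition of unity: the shape functions sum to 1 anywhere in the element
        double x = 0.3;
        System.out.println(shape(0, x) + shape(1, x) + shape(2, x));
    }
}
```

The Kronecker-delta property and the partition of unity checked above are exactly what make these functions usable for interpolating a field over the element.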
Turning integration into multiplication using Gauss quadrature: A case of 3-point Gauss quadrature.
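As a sketch of the idea in the figure, 3-point Gauss-Legendre quadrature replaces an integral over [-1, 1] with a weighted sum of three function evaluations. The nodes and weights below are the standard textbook values; the class name is an illustrative choice:

```java
import java.util.function.DoubleUnaryOperator;

// 3-point Gauss-Legendre quadrature on [-1, 1]: the integral becomes a
// weighted sum of three function evaluations, exact for polynomials up to degree 5.
public class Gauss3 {
    static final double[] X = {-Math.sqrt(3.0 / 5.0), 0.0, Math.sqrt(3.0 / 5.0)};
    static final double[] W = {5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0};

    public static double integrate(DoubleUnaryOperator f) {
        double sum = 0.0;
        for (int i = 0; i < 3; i++) {
            sum += W[i] * f.applyAsDouble(X[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        // The exact integral of x^4 over [-1, 1] is 2/5 = 0.4
        System.out.println(integrate(x -> x * x * x * x));
    }
}
```

Because the rule is exact for polynomials up to degree 5, integrating x^4 returns 2/5 up to floating-point rounding, with only three function evaluations.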
For example, see what different types of discretization mean and how they are related to the order of polynomial interpolation equations:
Shape functions for 6-node quadratic and 10-node cubic triangle elements. Quadratic elements use second-order polynomials for domain discretization, while cubic elements use third-order polynomials.
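For the 6-node quadratic triangle in the figure, the shape functions take a compact form in area (barycentric) coordinates. The following plain-Java sketch (class and method names are illustrative) evaluates them and checks the partition-of-unity property:

```java
// Shape functions for a 6-node quadratic triangle in area (barycentric)
// coordinates L1 + L2 + L3 = 1: corner nodes use Li*(2*Li - 1),
// midside nodes use 4*Li*Lj.
public class QuadTriangle {
    public static double[] shapes(double l1, double l2) {
        double l3 = 1.0 - l1 - l2;
        return new double[] {
            l1 * (2.0 * l1 - 1.0), // corner 1
            l2 * (2.0 * l2 - 1.0), // corner 2
            l3 * (2.0 * l3 - 1.0), // corner 3
            4.0 * l1 * l2,         // midside between corners 1 and 2
            4.0 * l2 * l3,         // midside between corners 2 and 3
            4.0 * l3 * l1          // midside between corners 3 and 1
        };
    }

    public static void main(String[] args) {
        // The six functions sum to 1 at any point in the element
        double sum = 0.0;
        for (double n : shapes(0.2, 0.5)) {
            sum += n;
        }
        System.out.println(sum);
    }
}
```

At a corner, one function evaluates to 1 and the other five to 0, mirroring the Kronecker-delta behavior of the 1D Lagrange elements.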
You will also see the difference between a plane-stress and a plane-strain situation in a 2D solid mechanics problem:
A plane-stress situation (left) versus a plane-strain situation (right).
Other topics include the meaning of element order and integration points and the importance of performing a convergence study with mesh refinement to ensure the soundness of the modeling results:
Mesh refinement is crucial for obtaining converged modeling results.
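The logic of a convergence study can be sketched outside of any FEM package: refine the discretization, recompute the quantity of interest, and stop when successive results agree. The toy plain-Java example below (illustrative, not taken from the book) does this with the composite trapezoidal rule, doubling the number of elements at each step:

```java
import java.util.function.DoubleUnaryOperator;

// A toy convergence study: approximate an integral with the composite
// trapezoidal rule, doubling the number of elements until successive
// approximations agree to a tolerance (a stand-in for mesh refinement).
public class ConvergenceStudy {
    public static double trapezoid(DoubleUnaryOperator f, double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0.5 * (f.applyAsDouble(a) + f.applyAsDouble(b));
        for (int i = 1; i < n; i++) {
            sum += f.applyAsDouble(a + i * h);
        }
        return sum * h;
    }

    public static double converge(DoubleUnaryOperator f, double a, double b, double tol) {
        int n = 2;
        double prev = trapezoid(f, a, b, n);
        while (true) {
            n *= 2; // "refine the mesh"
            double next = trapezoid(f, a, b, n);
            if (Math.abs(next - prev) < tol) {
                return next;
            }
            prev = next;
        }
    }

    public static void main(String[] args) {
        // The exact integral of sin(x) on [0, pi] is 2
        System.out.println(converge(Math::sin, 0.0, Math.PI, 1e-8));
    }
}
```

The same stopping criterion, watching a scalar quantity settle as the mesh is refined, is what a mesh convergence study in a finite element model boils down to.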
In addition to these important fundamentals, real-world modeling problems are also discussed, like the denture and spine device models pictured in the first section.
Through hands-on experiences gained by problem-solving assignments in the process of learning how, students will see not only the feasibility, but also the practicality of solutions in a holistic way by taking advantage of computational tools. With this approach, real-world problems in a variety of domains, either individually or combined, can be dealt with in a transdisciplinary way.
For COMSOL Multiphysics® users, getting an in-depth look at the “intricate machinery” of computational modeling and its operations will certainly enhance your programming skills. To make the book more engaging to study, I also strove to add artistic beauty to the graphic illustrations throughout, creating them myself from mathematical equations using the open-source LaTeX typesetting system and companion packages such as TikZ and PGF. This allowed me to show the physics and engineering concepts from a teacher’s perspective.
To learn more about Integrative Engineering: A Computational Approach to Biomedical Problems or to purchase the book, click on the button below.
Guigen Zhang is a professor and associate chair of the Department of Bioengineering and professor of the Department of Electrical and Computer Engineering at Clemson University. He is a fellow of the American Institute for Medical and Biological Engineering.
Dr. Zhang has been invited to serve on expert review panels by the NIH, NSF, and other U.S. and Canadian agencies on subjects covering nanotechnology, biomaterials and biointerfaces, biotechnology, sensors and biosensors, nanoscale drug delivery, the neurotechnology nexus, and bioengineering research partnerships, among others. He has also been a keynote and invited speaker at numerous national and international professional conferences, including the Venture Conference in Switzerland and the OECD Conference in France. Professor Zhang is also active in serving prominent leadership roles in professional societies. He is currently the executive editor of the Biomaterials Forum of the Society For Biomaterials and president of the Institute of Biological Engineering.
To demonstrate this functionality, just like in the previous blog post, we will first load the Micromixer tutorial model from the Application Libraries. This model is available in the folder COMSOL Multiphysics > Fluid Dynamics and illustrates fluid flow and mass transport in a laminar static mixer.
The model performs a fluid flow simulation using a Laminar Flow interface. In the next step, it shows how to calculate the mixing efficiency by means of a Transport of Diluted Species interface, using the results from the fluid flow simulation as input. The species will be transported downstream based on the fluid velocity.
The computation time for this model is a few minutes. In the previous blog post, we made the computations go faster by not solving for the Transport of Diluted Species part. This time around, however, we need the concentration profile throughout the mixer. To speed up the computation in this case, you can set the Predefined Element Size to Extremely coarse.
For this example, the step of coarsening the mesh is optional and everything that follows will work even without this change.
Let’s now see how to use a parameterized slice plot together with an animation to export an entire series of images, where each image corresponds to a single slice.
This is the default plot for the concentration at 5 different yz-plane slices in the x direction in a solved version of the library model:
You can get a slightly improved visualization by setting the Quality Resolution setting to Extra fine, like so:
Instead of the default 5 evenly spaced slices in the Slice plot for the Concentration, you can change the Plane Data entry method to Coordinates. For example, you can generate a single slice at 0.5 mm, as follows.
The resulting plot is shown in the figure below:
We can parameterize the location of the slices by means of a parameter. Right-click the Results node and select Parameters.
Define a parameter xcut with the value -3.5[mm]. (The microchannel ranges from -3.5 mm to 8 mm in the x direction.)
For the Slice plot, in the section for Plane Data, type xcut in the edit field for X-coordinates.
The corresponding slice plot is shown here:
What if you would now like to export a series of images corresponding to different values of the slice position? For this purpose, you can use a file-export-based animation.
To generate an animation, select File from the Animation menu in the ribbon toolbar.
Alternatively, you can right-click the Export node under Results and select Animation > File.
In the Settings window of the Animation node in the model tree, select Image sequence as the Output type.
For the Filename, type, for example, C:\COMSOL\my_image.png. This assumes that you have a folder C:\COMSOL in your system, but you can write to any folder where you have write permission.
To link the exported file to the parameter xcut, change the Sequence type to Result parameter. This setting is available in the Animation Editing section.
Choose xcut as the Parameter, with Start set to -3.5, Stop set to 8, and Unit set to mm.
At the top of the Settings window for Animation, click Export to start the generation of images. The images will get a suffix corresponding to the number in the sequence. The number of frames, or images, is set in the Frames section.
A series of images is generated with names: my_image01.png, my_image02.png, …, my_image25.png, as shown in the screenshot below.
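Assuming the result-parameter sweep uses evenly spaced values, each of the 25 frames corresponds to one slice position between the Start and Stop values. The small plain-Java sketch below (the class name is an illustrative choice) reproduces those positions:

```java
// Reproduces the 25 evenly spaced slice positions (in mm) that the
// result-parameter animation sweeps through, from -3.5 mm to 8 mm.
public class FramePositions {
    public static double[] positions(double start, double stop, int frames) {
        double[] x = new double[frames];
        double step = (stop - start) / (frames - 1);
        for (int i = 0; i < frames; i++) {
            x[i] = start + i * step;
        }
        return x;
    }

    public static void main(String[] args) {
        double[] xcut = positions(-3.5, 8.0, 25);
        System.out.println(xcut[0]); // -3.5
        System.out.println(xcut[xcut.length - 1]);
    }
}
```

Frame 01 of the exported sequence corresponds to the first position, frame 25 to the last.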
Let’s now see how the generation of images can be made automatically after solving a model in COMSOL Multiphysics.
To be able to define a sequence of operations under the Study node, we enable Advanced Study Options. This is an available menu option under the Model Builder toolbar. Click the “eye” symbol to see the menu.
Under the Job Configuration node that is now visible, select Sequence. This procedure was described in the previous blog post on how to use job sequences.
In the Settings window for Solution, select All. This ensures that all study steps are run.
Right-click Sequence and select Results > Export to File.
In the Export to File Settings window, for the Run option, select Animation 1. In this simple example, with just one node under Export, we could just as well have left the default option All.
To solve using the Sequence, right-click and select Run. Alternatively, click the Run button at the top of the Settings window.
The previous export operation generated a series of 3D images. What if you want a series of 2D images for each slice? This can be accomplished by using a parameterized Cut Plane.
Right-click the Data Sets node and select Cut Plane.
In the Settings window of the Cut Plane, type xcut for the X-coordinate.
The already existing 3D plot groups are not useful for 2D plots, so right-click Results and select 2D Plot Group.
In the Settings window for the 2D Plot Group, select Cut Plane 1 as the Data set.
Add a Surface plot node under the 2D Plot Group and change the Expression to c, corresponding to the concentration.
To tidy up the list of plot groups, change the name of the 2D Plot Group to Cut Plane Concentration.
Now, go to the Animation node in the model tree. In the corresponding Settings window, change the Subject to Cut Plane Concentration.
Click Export to generate the sequence of 2D images, as shown in the file browser view here:
To get this view using Windows® Explorer, change the view to Large Icons.
Just like in the previous example, you can now go ahead and run the Job Sequence to solve and then have the set of images generated and saved to file — automatically.
To try this example yourself, click on the button above to access the MPH-files.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
There are four main ways you can generate the geometry for your simulation in COMSOL Multiphysics:
Each of these means of geometry creation provides different opportunities and advantages. The first method enables you to generate your geometry using only the COMSOL Multiphysics modeling environment. This method is the focus of today’s blog post, as we will discuss its associated workflow.
The general steps for creating a geometry include:
Sometimes, it can be more efficient to create geometry primitives in lower dimensions using work planes and then extend them into the dimension that was not initially considered. Work planes can also be used to define cross sections from a higher dimensional entity to a lower dimensional workspace.
Let’s now dive further into the details of using geometry primitives, geometry operations, and work plane operations. Note that these operations can be used for geometries native to COMSOL Multiphysics as well as those created through another CAD program.
COMSOL Multiphysics contains a number of ways in which you can generate the objects for your geometry. One option is to choose an object from the selection of built-in geometric shapes in the software, select and add the primitive object to your geometry sequence, and then edit the template provided through the Settings window. This enables you to specify the exact position, angle, and dimensions of the object as well as to quickly make changes to any of those settings, if needed. Once in the sequence, the object can then be combined and manipulated with other primitive objects to form your final geometry.
Creating and modifying a rectangle using the Settings window in COMSOL Multiphysics.
The types of objects available for you to choose from depend on the spatial dimension of your component. This includes geometric primitives for conventional shapes as well as other less traditional shapes. For 3D models, you can add objects like blocks or spheres as well as torus or helix objects. Similarly, for 2D models, primitives such as rectangles, circles, Bézier polygons, and parametric curves are available.
Another option for generating objects in your geometry sequence, available for 2D and 1D models, is to sketch geometric primitive objects with the mouse.
This is done by:
Immediately afterward, the object you have outlined appears and is added to your geometry sequence.
Creating and modifying a rectangle using the geometry drawing tools.
For 3D models, although you cannot sketch a geometry primitive with the mouse, you can draw a cross section of it in a work plane, which can then be expanded into a 3D object. We demonstrate both options mentioned above in dedicated chapters within the building 2D geometries part of the video series. Additionally, we discuss the advantages of using parameters during this process and demonstrate how they aid in streamlining your geometry setup.
After generating a few objects in your geometry sequence, you can start to combine them in meaningful ways using geometry operations. In the video chapters on building 2D geometries and 3D geometries in our series, we build a rectangular plate containing slots and a grille, respectively. This is done using a combination of several Boolean and transformation operations.
The rectangular plate with slots (left) and with a grille (right).
The Boolean operations used to create the geometries pictured above include the Union, Intersection, and Difference operations, which enable you to combine objects, create a new object from the intersection of other objects, and subtract objects from one another, respectively. Likewise, the transformation operations used include the Move, Copy, Mirror, and Array operations, which enable you to change the position of objects; create duplicate objects; reflect objects over a plane, line, or point; and create an arrangement of duplicates of another object.
Aside from some of the more conventional geometry features used above, there are also specialized geometry tools used to help create certain types of geometries. Partition operations enable you to split geometric entities such as objects, domains, boundaries, and edges so that you can separate, remove, or simplify the geometry in your model. When we discuss using partition operations for geometries in the video, we demonstrate how to perform this on a helix object as well as the geometry for the shell and tube heat exchanger tutorial model.
A helix geometry split down the middle, from our video chapter on partitioning geometries.
As you continue to build upon your geometry (adding more of these operations and other primitives to your sequence), you’ll notice that your sequence can become quite complex and that making any changes thereafter can become cumbersome. Changing the size of one object in the geometry may require other objects to be resized to accommodate that change. For these and other reasons, we encourage the use of parameters in the geometry operations you use in your sequence. We discuss the reasons for this and demonstrate how with a few of the example geometries built throughout the video series.
COMSOL Multiphysics contains several tools known as work plane operations, which can be used to convert a 2D geometry in a work plane into a 3D object. In the video series, we show and demonstrate the Extrude, Revolve, and Sweep operations.
The Extrude operation enables you to extrude objects from a work plane or planar face to create 3D objects.
The Extrude operation, converting a rectangular plate with holes into a 3D block containing slots. The blue arrow in the center represents the direction in which the shape is extended, which is perpendicular to the work plane.
With the Revolve operation, you can revolve objects from a work plane or planar face about an axis to create 3D objects.
The Revolve operation, converting a circle into a torus. The blue arrows represent the axis about which the shape is revolved.
Finally, there’s the Sweep operation, which enables you to sweep objects from a work plane or planar face along a path to create a 3D object.
The Sweep operation, converting a circle and 2D line path into a pipeline. Two work planes that are perpendicular to each other are used to define the shape and line path separately.
Using these work plane operations (starting from a 2D model and then expanding it into 3D) can be a significantly quicker alternative for building your 3D geometry, as opposed to building it entirely of 3D objects.
The COMSOL Multiphysics software also contains a tool for converting a 3D geometry into a 2D geometry. This is done through using a work plane along with the Cross Section geometry operation. The functionality can be used to simplify your model, among other things, which we discuss in the video series.
The axisymmetric cross section of a light bulb, built in the video chapter on creating 2D models from 3D geometries.
The geometry generated through the Cross Section operation is based on the intersection of your 3D geometry with a work plane. Thus, the 2D geometry you obtain is the result of wherever the work plane cuts through the 3D solids in your model. Within the operation, you can choose for the work plane to intersect (and thus include) all objects or a selection of objects that you specify.
To obtain the appropriate cross section for your analysis, using this functionality may require performing some additional preparation on the original 3D geometry. Sometimes, this means separating and then removing certain parts of your geometry, wherein partition operations can be helpful. We elaborate on this further and demonstrate it in a dedicated chapter within the video series.
Whether you are building a geometry entirely within COMSOL Multiphysics or working off of an external file, you can use the geometry functionality discussed in this blog post to completely customize the composition of your geometry objects. If you are interested in seeing these tools in action, watch our introductory geometry video series:
We often make modeling decisions based on partial information. Does the flow stay laminar or does it become turbulent? Does a solid stay elastic or does it yield to become plastic under the specified loads and constraints? Are deformations large enough to require a geometrically nonlinear analysis or is small deformation theory good enough? Sometimes, a limit analysis can answer these questions. If we can answer such questions before solving the problem, we can pick the appropriate model. If not, it is economical to solve the simpler model and switch to the more complicated model only if the solution is not valid.
For example, we can do elastic analysis first and switch to elastoplastic analysis only if the maximum stress obtained by the first analysis is above the elastic limit. Similarly, we can solve assuming laminar flow and include turbulence in our model only if the Reynolds number obtained from the first analysis is high enough.
In these and other situations, where we may have to change our modeling approach based on preliminary results, it would save our efforts to automate the workflow. Information obtained from the first study can also be reused to make subsequent studies computationally efficient. Today, we will demonstrate how to do so by writing code to automate an elastoplastic analysis. The code runs an elastic analysis, checks if the maximum stress exceeds the yield stress of the material, and runs the model with plasticity if necessary.
Using the COMSOL Multiphysics® software on the Windows® operating system platform, you can build an app with a customized user interface and include methods for additional functionality. Apps built using the Application Builder in Windows® can then be run on any operating system. You can share the apps with colleagues, customers, students, and more.
In version 5.3 of the COMSOL® software, we introduced a new feature, called a model method, that lets you extend the functionality of the software by writing code inside the COMSOL Multiphysics graphical user interface (GUI), even when you do not intend to make apps. The Record Code, Editor Tools, Language Elements, and Model Expression features of the Application Builder can be used to easily generate the Java® code needed to write a model method.
To add a model method, go to the Developer tab and click Model Method.
To run a model method, go to the Developer tab, click Run Model Method, and choose a method.
In previous blog posts, we discussed how model methods work and demonstrated how to use them to create randomized geometry. Today, we will extend that conversation to physics and study settings.
To demonstrate how to use model methods in physics selection and study sequences, we will use a model based on an elastoplastic analysis of a holed plate from our Application Gallery. The Application Gallery example was set up with prior knowledge that the stress will exceed the elastic limit. As such, plasticity was added in the analysis. Today, we will have a model method find that out and, if plasticity is necessary, incorporate that automatically.
The procedure we want to automate here contains the following steps:
We add two studies and disable plasticity in the first study. In the second study, we add an Auxiliary Sweep for load ramping in the elastoplastic analysis. In the first study, the full load is applied by setting para to 1 in Global Definitions. In the second study, we use the parameters p_low and p_next to make the study efficient. These parameters are going to be set based on the results of the first study.
The second study will be computed only if the assumptions in the first study turn out to be incorrect.
In the Results section, we add a Derived Values node to evaluate the maximum stress from the first study. This could alternatively be done using the Maximum Component Coupling operator. This value, obtained in pascals (as shown in the Settings window for Surface Maximum 1), will be compared to the initial yield stress in pascals. To this end, we introduce the parameter SY_scaled.
SYield and SY_scaled are for the Materials node and the model method, respectively.
Now that we have all of the ingredients we need, let’s write the model method.
A model method that is used to automate an efficient elastoplastic analysis.
Two of the above lines warrant some discussion. If StressRatio in line 9 is greater than 1, its reciprocal will tell us the load parameter at the elastic limit. Note that we could do so here, as plasticity is the only possible nonlinearity in our problem; the model has no geometric nonlinearity or contact.
With this information, the second study (if necessary) is solved only once in the elastic region: at the elastic limit. We can see what values of the load parameter are used in the Results section.
The lowest value of the load parameter, highlighted, is estimated using the elastic study.
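The branching logic the model method performs is ordinary arithmetic, so it can be sketched independently of the COMSOL API. In the plain-Java sketch below (class and method names are illustrative), a stress ratio above 1 triggers the second study, and its reciprocal gives the load parameter at the elastic limit. This relies on the fact that stress scales linearly with load in a purely elastic, geometrically linear model:

```java
// Sketch of the decision logic in the model method: compare the maximum
// elastic stress to the yield stress and, if plasticity is needed, start
// the auxiliary sweep at the load parameter where yielding begins.
public class ElastoplasticCheck {
    // Returns the lowest load parameter for the second study, or -1 if the
    // purely elastic solution is already valid and no second study is needed.
    public static double lowestLoadParameter(double maxStress, double yieldStress) {
        double stressRatio = maxStress / yieldStress;
        if (stressRatio <= 1.0) {
            return -1.0; // elastic assumption holds
        }
        // Linear elasticity: stress scales with load, so yielding starts at 1/stressRatio
        return 1.0 / stressRatio;
    }

    public static void main(String[] args) {
        System.out.println(lowestLoadParameter(150.0e6, 300.0e6)); // -1.0 (stays elastic)
        System.out.println(lowestLoadParameter(500.0e6, 250.0e6)); // 0.5
    }
}
```

In the actual model method, the second branch is where p_low and p_next are written back into Global Definitions before the elastoplastic study is run.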
If you go back to Global Definitions, you will see that the model method has updated the parameters p_low and p_next from their original values of zero, shown earlier.
Parameter values changed by a model method based on results from a preliminary study.
Today, we have demonstrated setting up efficient physics choices and study sequences using methods. Similar tasks can be accomplished by scripting. However, model methods make it easy to grab the model objects and methods needed by using the same functionality employed in the Application Builder. When needed, these methods can be augmented by regular Java® classes, such as the Math class we used in the first example, or your own classes.
We have only shown one way of performing tasks to illustrate the use of model methods in physics and solver settings, but there are alternatives and refinements. For example, in the elastoplasticity analysis, we added two studies in the Model Builder. Alternatively, you can use a single study where the plasticity and auxiliary sweep features can be enabled or disabled from a model method.
In the examples above, there are logical decisions that have to be made between studies. When there are no such decisions and you just want to refer to one study from another (say, to use one study as a precomputing step for a subsequent study), you can use the Study Reference feature. See the section Study Reference in the COMSOL Multiphysics Reference Manual for details.
If you have any questions related to today’s discussion or using COMSOL Multiphysics, contact us via the button below.
Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle and Java are registered trademarks of Oracle and/or its affiliates.
As we saw in a previous blog post on creating randomized geometries, you can use the Record Method functionality to record a series of operations that you’re performing within the COMSOL Multiphysics graphical user interface (GUI) and then replay that method to reproduce those same steps. Of course, this doesn’t do us any good if we have already created the file — we don’t want to go back and rerecord the entire file. As it turns out, though, COMSOL Multiphysics automatically keeps a history of everything that you’ve done in a model file as Java® code. We can just extract the relevant operations directly from this code and insert them into a new model method.
The Compact History option.
To extract the history of all operations within a file, there are a few steps you need to take. First, go to the File menu and choose the Compact History option. We do this because COMSOL Multiphysics keeps a history of all commands, but we only want the minimum set of commands that were used to generate the existing model. Next, go to the File menu > Save As and save using the Model File for Java file type. You now have a text file that contains Java® code. Try this out yourself and open the resulting file in a text editor. This file always has lines of code at the beginning and end that are similar to what is shown below:
/* example_model.java */
import com.comsol.model.*;
import com.comsol.model.util.*;

public class example_model {

  public static Model run() {
    Model model = ModelUtil.create("Model");

    model.modelPath("C:\\Temp");
    model.label("example_model.mph");
    model.comments("This is an example model");

    ...
    /* Lines of code describing the model contents */
    ...

    return model;
  }

  public static void main(String[] args) {
    run();
  }
}
The above code snippet shows us what we can remove. Only the code between Model model = ModelUtil.create("Model"); and return model; is used to define all of the features within the model. In fact, we can also remove the model.modelPath();, model.label();, and model.comments(); lines. Go ahead and remove all of these lines of code in your text editor and you are left with just the set of commands needed to reproduce the model in a model method.
Next, open a new blank model file, go to the Application Builder, and create a new model method. Copy all of the lines from your edited Java® file into this new model method. Then, switch back to the Model Builder, go to the Developer tab, and choose Run Model Method to run this code. Running this model method reproduces all of the steps from your original file, including solving the model. Solving the model may take a long time, so we often want to trim our model method.
A model method within the Application Builder.
There are two approaches that you can take for trimming down the code. The first is to manually edit the Java® code itself, pruning out any code that you don’t want to rerun. It’s helpful to have the COMSOL Programming Reference Manual handy if you’re going to do this, because you may need to know what every line does before you delete it. The second, simpler approach is to delete the features directly within the COMSOL Multiphysics GUI. Start with a copy of your original model file and delete everything that you don’t want to appear within the method. You can delete the geometry sequence, mesh, study steps, results visualizations, and anything else that you don’t want to reproduce.
Let’s take a look at a quick example of this. Suppose that you’ve built a model that simulates thermal curing and you want to include this thermal curing simulation in other existing models that already have the heat transfer simulations set up.
As we saw in a previous blog post, modeling thermal curing in addition to heat transfer requires three steps:
We can build a model in the GUI that contains just these steps and then write out the Java® file. Of course, we still need to do some manual editing, and it’s also helpful to go through the Application Programming Guide to get an introduction to the basics. But once you’re comfortable with all of the syntax, you’ll see that the above three steps within the GUI can be written in the model method shown here:
model.param().set("H_r", "500[kJ/kg]", "Total Heat of Reaction");
model.param().set("A", "200e3[1/s]", "Frequency Factor");
model.param().set("E_a", "150[kJ/mol]", "Activation Energy");
model.param().set("n", "1.4", "Order of Reaction");
model.component("comp1").physics("ht").create("hsNEW", "HeatSource");
model.component("comp1").physics("ht").feature("hsNEW").selection().all();
model.component("comp1").physics("ht").feature("hsNEW").set("Q0", "-ht.rho*H_r*d(alpha,t)");
model.component("comp1").physics().create("dode", "DomainODE", "geom1");
model.component("comp1").physics("dode").field("dimensionless").field("alpha");
model.component("comp1").physics("dode").field("dimensionless").component(new String[]{"alpha"});
model.component("comp1").physics("dode").prop("Units").set("SourceTermQuantity", "frequency");
model.component("comp1").physics("dode").feature("dode1").set("f", "A*exp(-E_a/R_const/T)*(1-alpha)^n");
The first four lines of this code snippet define an additional set of global parameters. The next three lines add a Heat Source domain feature to an existing Heat Transfer interface (with the tag ht), apply the heat source to all domains, and define the heat source term. The last five lines set up a Domain ODE interface that is applied by default to all domains in the model and set the variable name, the units, and the equation to solve.
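As a standalone illustration of the cure kinetics equation that the Domain ODE solves, here is a minimal Python sketch (outside COMSOL) that integrates d(alpha)/dt = A*exp(-E_a/(R*T))*(1-alpha)^n at a fixed temperature, using the same parameter values as in the model method. The temperature and time step are arbitrary choices for the example; in the actual model, T comes from the coupled heat transfer solution:

```python
import math

# Parameters from the model method above
A   = 200e3    # frequency factor, 1/s
E_a = 150e3    # activation energy, J/mol
R   = 8.314    # gas constant (R_const in COMSOL), J/(mol*K)
n   = 1.4      # order of reaction

def cure(T, t_end, dt=1.0):
    """Forward Euler integration of d(alpha)/dt = A*exp(-E_a/(R*T))*(1-alpha)^n
    at a fixed temperature T, from alpha = 0."""
    k = A * math.exp(-E_a / (R * T))   # temperature-dependent rate
    alpha, t = 0.0, 0.0
    while t < t_end:
        alpha = min(alpha + dt * k * (1.0 - alpha) ** n, 1.0)
        t += dt
    return alpha
```

Because of the Arrhenius factor, the degree of cure after a fixed time rises sharply with temperature, which is exactly the coupling that makes the heat source term interesting.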
Running the model method from the Developer tab.
We can run the above model method in a file that already has a heat transfer analysis set up. For example, try adding and running this model method in the Axisymmetric Transient Heat Transfer tutorial, available in the Application Library in COMSOL Multiphysics. Then, just re-solve the model to solve for both temperature and degree of cure.
Now, there are a few assumptions in the above code snippet:
- The model contains a component with the tag comp1 and a geometry with the tag geom1.
- There is an existing Heat Transfer interface with the tag ht.
- The feature tags hsNEW and dode, as well as the variable name alpha, are not already in use.
Of course, as you develop your own model methods, you need to be able to recognize and address these kinds of general logical issues.
From this simple example, you can also see that you can create a model method that acts as a reusable template for any part of the modeling process in COMSOL Multiphysics. You might want to run such a template model method in every new file you create, possibly to load in a set of custom material properties, set up a complicated physics interface, or define a complicated set of expressions. You might also want to reuse the same model method in an existing file to set up a particular customized study type, modify solver settings, or define a results visualization that you plan to reuse over and over again.
Once you get comfortable with the basics of this workflow, you’ll find yourself saving lots of time, which we hope you’ll appreciate!
Oracle and Java are registered trademarks of Oracle and/or its affiliates.
To demonstrate this functionality, we will first load the Micromixer tutorial model from the Application Libraries. This model is available in the folder COMSOL Multiphysics > Fluid Dynamics and illustrates fluid flow and mass transport in a laminar static mixer.
The model performs a fluid flow simulation using a Laminar Flow interface. In the next step, it shows how to calculate the mixing efficiency by means of a Transport of Diluted Species interface, using the results from the fluid flow simulation as input. The species will be transported downstream based on the fluid velocity.
The computation time for this model is a few minutes. To simplify the model a bit so that the computation runs more quickly, we won’t solve for the species transport. To achieve this, we will make one modification in the Settings window of the second study step, Step 2: Stationary 2, by clearing the Transport of Diluted Species check box.
We can make an additional change to the model in order for it to run faster. Set the Sequence type for the mesh to Physics-controlled mesh and the element size to Extremely coarse.
Now, we can compute Study 1 to make sure everything works. The resulting plot shows the velocity magnitude at a few slices along the mixer geometry.
Here, we will focus our attention on one important part of a job configuration: the Sequence option.
To be able to define a sequence of operations under the Study node, we enable Advanced Study Options. This menu option is available in the Model Builder toolbar; click the “eye” symbol to see the menu.
Enabling this setting reveals a hidden Job Configurations node in the model tree. This node is something that you don’t need to worry about during conventional modeling work. It essentially stores low-level information pertaining to the order in which the solution process should be run. Normally, this is controlled indirectly from the top level of a study without the need for enabling Advanced Study Options.
Right-click Job Configurations and select Sequence.
Next, right-click Sequence to see, below the Run option, a variety of options that can be added as an ordered sequence of operations performed when running the sequence:
Job refers to another sequence that is to be run from this sequence, while Solution runs a Solution node as available under the Solver Configurations node, available further up in the Study tree.
Under Other, you can choose External Class, which calls an external Java® class file. Another option, Geometry, builds the Geometry node. This can be used, for example, in combination with a parametric sweep to generate a sequence of MPH-files with different geometry parameters. The Mesh option builds the Mesh node.
Save Model to File saves the solved model to an MPH-file.
Under the Results option, you can choose Plot Group to run all or a selected set of plot groups. This is useful for automating the generation of plot groups after solving, so that you don’t have to manually click through all of the plot groups to generate the corresponding visualizations. The Derived Value option is there for legacy reasons; we recommend that you instead use the Evaluate Derived Values option, which evaluates the nodes under Results > Derived Values. The Export to File option runs any data export node under the Export node.
Let’s now create a simple sequence. Right-click the Sequence node and select Solution.
The default option for a Solution node in a sequence is to run all solution nodes. The Run option in the General section lets you specify which Solution data structures should be computed. These data structures, which are low-level representations of the solutions, are available as child nodes under Solver Configurations and can be recognized by the short names written within parentheses, such as (sol1) and (sol2).
In this example, you can keep the default All for the Solution data structures.
We would like to save the file when the solver is finished. Right-click the Sequence node and select Save Model to File.
In the Settings window, you can see a number of options related to the capability of saving a series of MPH-files with parameters appended to the file name. This is very useful for parametric sweeps, such as batch sweeps. However, we will not need it in such a simple example, so we change the option Add parameters to filename to None. At this stage, we also need to give a file name in a location where we have permission to write. In this example, the file name and path is C:\COMSOL\myfile.mph.
To run these operations, select the Sequence node and click Run.
The library model that we started from already has one defined derived value. You can see this under Results > Derived Values > Global Evaluation. The variable is called S_outlet and is the relative concentration variance at the outlet. It is defined as a variable under Component > Definitions > Variables.
The value of S_outlet is sent to Table 1. We can choose to store this value on file by changing a setting in the Settings window of Table 1. Change Store table to On file and type a file name; for example, C:\COMSOL\my_data.txt.
Now, add an Evaluate Derived Values operation to the sequence.
In the General section, you can change the Evaluate setting to Global Evaluation 1. However, in this simple example model, you can omit this step. Note that the name of the node in the model tree changes to Evaluate: Global Evaluation 1.
You can now run the sequence again. However, for this last step to make sense, you need to enable the Transport of Diluted Species interface in the Settings window for Step 2: Stationary 2.
If you want to run a job sequence from the command line in the Windows® or Linux® operating systems or macOS, you cannot use the method shown above; instead, you need to add a parametric sweep with a dummy parameter. If you are already running a parametric sweep, all you need to know is that a parametric sweep is just a special type of job sequence; then follow the instructions above with a Job Configurations > Parametric Sweep node in place of the Job Configurations > Sequence node.
The reason for this is historical and reflects the evolution of the Study node functionality over time. The operating system command interface doesn’t let you run any part of a Study node that is not controlled at the top level of the Study node. You can only specify which study to run, for example, in the Linux® operating system:
comsol batch -inputfile mymodel.mph -study std1
for Study 1 with tag std1.
You cannot run a sequence in this way, since the top-level study step is unaware of your edits under the Job Configurations node. The easiest way to make the study step at the top of the Study node tree “aware” of these edits is to add a parametric sweep with an arbitrary parameter defined under Global Definitions > Parameters; say, dummy with value 1. Sweeping over this parameter adds the extra overhead needed to get a handle on the Job Configurations node from the top level of the Study node. Then, you can issue a command-line batch command to run it.
This is how the corresponding “dummy” sweep will look:
The following figure shows the corresponding sweep over one parameter value for the dummy parameter.
Now, knowing that the Parametric Sweep 1 node is just a special type of Sequence node, the child nodes Solution 1, Save Model to File 1, and Evaluate: Global Evaluation 1 are just as they are in the example above using Sequence.
Enable the display of model tree tags by selecting Tag from the Model Tree Node Text menu, available in the Model Builder toolbar.
The study tag std1 is now visible in the model tree:
The Linux® command shown earlier will now run the sequence of operations that solves, saves the model to file, and finally evaluates the Global Evaluation node. Note that if you only have one Study node in your model, then you can skip the input argument -study std1.
Job sequences can be used to automate a number of common tasks after solving a model. In this blog post, we have seen examples of:
- Solving a model as part of a sequence
- Saving the solved model to an MPH-file
- Evaluating derived values and storing the result in a table on file
There are other tasks that use job sequences that you can try on your own, including:
- Running plot groups and exporting data to file
- Building the geometry or mesh as part of a sequence
- Calling an external Java® class file
We hope you find that job sequences are a useful feature for your everyday modeling work!
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Linux is a registered trademark of Linus Torvalds in the U.S. and other countries. macOS is a trademark of Apple Inc., registered in the U.S. and other countries.
We have already discussed how to generate random-looking geometric surfaces by using sum and if operators in combination with uniform and Gaussian random distribution functions. The idea is that by summing up a set of spatially varying waves with careful choices of amplitudes and phase angles, we can mimic the type of randomness frequently found in natural materials and many natural phenomena in general.
To generate synthetic material data in 2D, we can use the same double-sum expression that we used to create randomized CAD surface data in the previous blog post:

f(x,y) = Σ_{k=-K}^{K} Σ_{l=-L}^{L} a(k,l)·cos(2π(k·x + l·y) + φ(k,l))
Note that this sum could also be used to generate random data for use in boundary conditions on surfaces in a 3D model.
For the 3D volume case, we will need triple sums:

f(x,y,z) = Σ_{k=-K}^{K} Σ_{l=-L}^{L} Σ_{m=-M}^{M} a(k,l,m)·cos(2π(k·x + l·y + m·z) + φ(k,l,m))
The frequency-dependent amplitudes will take their values from a random distribution according to

a(k,l) = g(k,l)·h(k,l), where h(k,l) = (k² + l²)^(−β/2)

and

a(k,l,m) = g(k,l,m)·h(k,l,m), where h(k,l,m) = (k² + l² + m²)^(−β/2)

for the 2D and 3D cases, respectively.
The functions g(k,l) and g(k,l,m) have a random Gaussian, or normal, distribution, and h(k,l) and h(k,l,m) are frequency-dependent amplitude functions whose values taper off for higher frequencies in accordance with the spectral exponent β. The higher the value of the spectral exponent, the smoother the generated data. For a variety of reasons, many natural processes have this property, characterized by slow variations dominating over fast ones.
The phase angles φ are sampled from a function u. The function has a uniform random distribution between −π and π:

φ(k,l) = u(k,l) and φ(k,l,m) = u(k,l,m)
The sums run over a set of discrete frequencies. More nonzero terms in a sum imply a larger number of higher-frequency contributions, resulting in data that contains finer details. The maximum frequencies in each direction are determined by the integers K, L, and M, respectively. These types of sums are closely linked to discrete inverse cosine transforms. They essentially correspond to an inverse cosine transform of the amplitude coefficients g(k,l) and g(k,l,m), with some additional manipulations of the phase angles. For details, see the previous blog post on how to generate random surfaces in COMSOL Multiphysics.
In COMSOL Multiphysics, the following double-sum expression can be entered in various edit fields, such as 2D material properties or 3D boundary conditions:
0.01*sum(sum(if((k!=0)||(l!=0),((k^2+l^2)^(-b/2))*g1(k,l)*cos(2*pi*(k*s1+l*s2)+u1(k,l)),0),k,-N,N),l,-N,N)
We can use a similar expression for the triple sum, which can be used for 3D material data, loads, sources, and sinks:
0.01*sum(sum(sum(if((k!=0)||(l!=0)||(m!=0),((k^2+l^2+m^2)^(-b/2))*g1(k,l,m)*cos(2*pi*(k*x+l*y+m*z)+u1(k,l,m)),0),k,-N,N),l,-N,N),m,-N,N)
where we have set K = L = M = N.
For more details about the underlying theory and syntax used here, see the blog post mentioned above.
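To get a feel for what these expressions compute, here is an equivalent plain Python sketch of the 2D double sum, evaluated outside COMSOL. The seed, N, and b values are arbitrary choices for the example, and the 0.01 scaling factor matches the COMSOL expression above:

```python
import math, random

random.seed(1)
N, b = 5, 2.0
g = {}   # Gaussian amplitudes g(k,l)
u = {}   # uniform phase angles u(k,l) in [-pi, pi]
for k in range(-N, N + 1):
    for l in range(-N, N + 1):
        g[k, l] = random.gauss(0.0, 1.0)
        u[k, l] = random.uniform(-math.pi, math.pi)

def field(x, y):
    """Evaluate the double-sum random field at one point."""
    s = 0.0
    for k in range(-N, N + 1):
        for l in range(-N, N + 1):
            if k == 0 and l == 0:
                continue  # the if() in the COMSOL expression skips (0,0)
            amp = (k * k + l * l) ** (-b / 2.0)   # h(k,l), spectral decay
            s += amp * g[k, l] * math.cos(2 * math.pi * (k * x + l * y) + u[k, l])
    return 0.01 * s
```

Evaluating `field` on a grid of (x, y) points reproduces the kind of smooth randomized data shown in the plots, with higher b giving smoother fields.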
Working with models that contain triple sums is computationally quite expensive. It is more efficient to first generate the data and export it to file and then import it again as an interpolation function, perhaps in a separate model. This interpolation function can then be used in a variety of ways, as we will explain later. Alternatively, you can use external software to generate the data by means of inverse FFT.
Let’s now take a look at how to generate 3D material data.
Creating a 3D volume matrix of random data is surprisingly easy. It amounts to creating a couple of random functions, some parameters, a Grid data set, and an Export node.
Start by creating a random function for the amplitudes with 3 input arguments based on a normal, or Gaussian, distribution. This corresponds to the function g(k,l,m) in the mathematical description above. In this case, we arbitrarily use the default settings for a random function with the mean value set to 0 and the standard deviation set to 1.
Next, we create a random function for the phase angles with 3 input arguments, based on a uniform distribution between −π and π, corresponding to the function u(k,l,m) above.
Now, create a data set of the type Grid 3D that references the random functions as a source. We need this data set to give an evaluation context to the triple-sum expression that we will define later in the Export node.
We will use two results parameters, N and b, for the spatial frequency resolution and spectral exponent, respectively.
To make it easier to work with the large data sets that are generated, you can turn off the Automatic update of plots option. This setting is available in the Settings window of the Results node. Turning it off avoids recomputing the expressions each time you click on a plot group under Results.
To visualize the data before exporting to file, add a Slice plot and type (or paste) the expression:
0.01*sum(sum(sum(if((k!=0)||(l!=0)||(m!=0),((k^2+l^2+m^2)^(-b/2))*g1(k,l,m)*cos(2*pi*(k*x+l*y+m*z)+u1(k,l,m)),0),k,-N,N),l,-N,N),m,-N,N)
To export the data, add a Data node under Export and type in the same expression as for the Slice plot above. In the Settings window of the Data node, make sure to set the data set to Grid 3D and to specify a file name that the data will be written to. Here, we can let the points be evaluated in a way that is independent of the Grid 3D data set.
For the setting Points to evaluate in, select Regular grid. For Data format, select Grid. You can freely choose the number of x, y, and z points to evaluate in. In the figure below, these have each been set to 50. Note that generating the data for grids with more than 50 points per direction may take a very long time. For a 50x50x50 grid, we already get 125,000 data points.
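A rough way to see why larger grids become expensive: each grid point evaluates the full triple sum over (2N+1)³ − 1 terms, so the cosine-evaluation count grows with both the grid resolution and N. A quick sketch of the operation count (the helper name is hypothetical, for illustration only):

```python
# Sketch: estimate the number of cosine evaluations needed to export
# the triple-sum expression on a regular grid.
def cosine_evaluations(grid_pts_per_axis, N):
    points = grid_pts_per_axis ** 3        # total grid points
    terms = (2 * N + 1) ** 3 - 1           # the (0,0,0) term is skipped
    return points * terms
```

For a 50x50x50 grid with N = 5, this already comes to 125,000 × 1330, well over a hundred million cosine evaluations, which is why exporting the data once and importing it as an interpolation function is the efficient route.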
The text file that is generated and exported can now be imported to a new file for the purpose of setting up a physics analysis where we use the generated data in material properties. This can be done for any type of physics, including electromagnetics, structural mechanics, acoustics, CFD, heat transfer, and chemical analysis. By using the COMSOL® API in model methods or applications, for example, such export and import operations can be automated and set in the context of a for-loop in order to generate statistics over a larger sample set. In this example, we only generate one set of data.
To illustrate how this type of data can be used, let’s create a test model of the simplest possible kind, based on a heat transfer analysis.
Start by creating a new 3D model using a Heat Transfer in Solids interface.
Now, import the data from file as an Interpolation function. This function will be available under the Global Definitions node.
The Interpolation function is given the name cloud and can later be accessed using expressions like cloud(x,y,z).
To make unit handling easy, when using this interpolation function, we will set the input argument units to m and the function unit to 1. The Function unit corresponds to the unit of f(x,y,z)=cloud(x,y,z) and setting it to 1 makes it dimensionless.
To keep things simple, let’s use a Block geometry object that matches the imported data exactly, with the corner at the origin and sides at 1. This corresponds to the size and position of the Grid 3D data set used earlier for generating the data.
For a “real” case, you can instead import or create a CAD geometry, which can be used to truncate the interpolation function in a suitable way. This truncation of data is automatic in COMSOL Multiphysics. The figure below shows such an interpolation of randomized data over a CAD model of a wrench. When evaluating over an arbitrary geometry, it can be useful to scale the coordinate values in the triple-sum. In the wrench example, instead of k*x+l*y+m*z, as in the expressions above, the scaled expression k*(x/0.05)+l*(y/0.05)+m*(z/0.05) is used.
This type of irregular material data may have uses in statistical modeling of materials such as those found in additive manufacturing, where perfect material homogeneity of a 3D-printed component may not always be possible to achieve. The data can be used for any type of material property, such as conductivity, permeability, and elasticity properties, to name a few.
Getting back to our unit cube example, we now add a Blank Material node. We will, somewhat arbitrarily, set the Density to 2000 kg/m^3 and the Heat capacity to 1 J/kg/K. Since we are performing a stationary analysis, the Heat capacity is irrelevant. The Thermal conductivity is set to the expression 1+2[W/m/K]*cloud(x,y,z). We can see from the earlier Slice plot visualization that the values for the interpolation table are roughly between -0.2 and 0.2. This means that this expression will generate an interesting spatial distribution of thermal conductivity values between about 0.6 and 1.4.
The coefficient 2[W/m/K] is used to assign a consistent unit to the expression. The constant 1 will be automatically converted to the correct unit: [W/m/K].
Let’s define some simple boundary conditions. Set the temperature at the top surface to 393.15[K] and the bottom surface to 293.15[K], corresponding to a 100-K temperature difference.
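As a sanity check on this setup, a 1D series-resistance sketch (outside COMSOL, not the actual 3D model) shows what flux to expect through a slab with varying conductivity. For a uniform conductivity of 1 W/(m·K), a 1 m slab, and a 100 K difference, the steady flux density is 100 W/m²; spatial variation in k lowers it toward the harmonic mean:

```python
# Sketch: steady 1D conduction through layers in series.
# Flux density q = dT / sum(dx / k_i), in W/m^2.
def heat_flux(ks, dT=100.0, L=1.0):
    dx = L / len(ks)                       # each layer has equal thickness
    resistance = sum(dx / k for k in ks)   # total thermal resistance (per area)
    return dT / resistance
```

This is why the randomized conductivity field produces total fluxes somewhat below the value for the mean conductivity: low-conductivity regions dominate the series resistance.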
Now, let’s generate a default mesh.
COMSOL Multiphysics will automatically interpolate values from the imported interpolation function, such as the material properties defined with it, onto this unstructured mesh. Alternatively, we could generate a swept mesh with hexahedral elements of the same size as the original data, 50x50x50. Such a representation would be more “true” to the original data.
You can experiment with different element orders, such as linear and quadratic types. Unless you use a very fine mesh that “oversamples” the data, the results will depend somewhat on the element order.
Running the Study will produce a couple of temperature plots, the second of which is an Isosurface plot.
Notice how the Isosurface plot looks a little bit jagged, which is due to the underlying irregularity of the material data. We can create another Slice plot to yet again visualize the data. This time, we do so under the guise of thermal conductivity by using the variable ht.kmean, which equals the expression 1+2[W/m/K]*cloud(x,y,z) defined earlier.
Here, the data is sampled at a lower density than that of the original interpolation function, since we used the default mesh with the Element size set to the default Normal setting. Successively refining this unstructured mesh will ultimately sample the data at more or less the same level of detail as the original synthesized data.
As mentioned earlier, the approach used here for heat transfer is applicable to virtually any other type of simulation. For example, in a porous media flow simulation, the randomized quantity would be permeability rather than thermal conductivity. In the case of porous media flow, a more advanced random distribution may be needed, but let’s save that discussion for a future blog post.
We can also use the synthesized data in a different way: by using Boolean expressions to convert it to binary data. This method can be used for simulating two or more materials where the material interface is randomized and the material properties change abruptly from one point to another. COMSOL Multiphysics will automatically handle the sharp interpolations needed for this case.
The following picture shows a visualization of the Boolean expression cloud(x,y,z)>-0.03, which evaluates to 1 at points where the inequality is true and 0 at the other points.
To get a nicer plot, you can set the resolution of the Slice plot to Extra fine. This setting is available in the Quality section of the Settings window for the Slice plot.
We would now like to use this type of binary information in a simulation. For example, it can be interesting to use it in a heat transfer simulation to see so-called percolation effects: for certain threshold values, a large connected component forms in the material, and the entire slab starts conducting much more efficiently.
To try this, change the expression for the thermal conductivity to 1-0.9[W/m/K]*(cloud(x,y,z)>thold), where thold is a global parameter. Start by defining thold in Parameters under Global Definitions.
Then, change the material data accordingly.
For each point in space, the Thermal conductivity will, in a binary fashion, evaluate to 1 or 0.1, depending on the value of the inequality.
Now, let’s see how different values of this Boolean threshold will affect the simulation. For this purpose, run a parametric sweep over the parameter thold from -0.2 to 0.2.
Add a Surface Integration node under Derived Values to integrate the total heat flux that goes through one of the surfaces. This is given by the surface integral of -ht.ntflux or +ht.ntflux, depending on whether you are integrating over the top or the bottom surface. In the figure below, we used the top surface.
The resulting Table plot shows the amount of heat power transferred (in watts). We can see that for threshold values around 0, the conductivity rises quickly from a low value to a high value. This is due to the sudden appearance of one or more large connected components where the expression 1-0.9[W/m/K]*(cloud(x,y,z)>thold) evaluates to 1.
The figures below show a Volume plot with a Filter attribute for three threshold values around 0. The filter shows the parts of the domain where cloud(x,y,z)<thold, which correspond to the locations of higher conductivity.
We can see from these figures how the highly conductive parts start connecting for the higher threshold values.
The corresponding Filter settings are shown in the figure below.
A similar type of percolation effect, seen here for binary data, is also happening for the continuous data case shown earlier. However, when using binary data, the effects are easier to see.
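The percolation behavior itself can be illustrated independently of COMSOL with a small 2D site-percolation sketch. The grid size, seed, and uniform value range are arbitrary example choices; cells at or below the threshold play the role of the highly conductive material, mirroring the expression cloud(x,y,z)>thold used above:

```python
import random
from collections import deque

random.seed(3)
n = 40
# Random values in roughly the same range as the cloud data (-0.2 to 0.2)
grid = [[random.uniform(-0.2, 0.2) for _ in range(n)] for _ in range(n)]

def percolates(thold):
    """True if a 4-connected path of conductive cells (value <= thold)
    links the bottom row to the top row. Breadth-first search."""
    cond = [[grid[i][j] <= thold for j in range(n)] for i in range(n)]
    seen = [[False] * n for _ in range(n)]
    q = deque()
    for j in range(n):
        if cond[0][j]:
            seen[0][j] = True
            q.append((0, j))
    while q:
        i, j = q.popleft()
        if i == n - 1:
            return True          # reached the opposite side
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and cond[ni][nj] and not seen[ni][nj]:
                seen[ni][nj] = True
                q.append((ni, nj))
    return False
```

Sweeping the threshold from low to high reproduces the qualitative jump seen in the Table plot: below a critical threshold no spanning path exists, and above it a connected conductive component carries most of the heat.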
Finally, let’s look at an alternative way of visualizing this type of random data. We will visualize the data set using a large number of randomized points (or rather, small spheres) and let the radius and color of the points vary according to the interpolation function cloud(x,y,z). In addition, we will only allow the points to be visualized for positive values of cloud(x,y,z). This technique will allow us to “see inside the data” in a way that is difficult to achieve using other methods. Note that this visualization technique works for any type of data, including real measured data.
Start by generating three random functions with a uniform distribution, with the Range set to 1 and the Mean set to 0.5.
To generate this type of plot, we use a Scatter Volume plot type. This is available by right-clicking a 3D Plot Group and selecting More Plots > Scatter Volume.
In the Settings window of the Scatter Volume plot, set the expressions for the X-, Y-, and Z-components to rn1(x), rn2(x), and rn3(x), respectively. Here, we are using the x-coordinate in an unusual way: merely as a long vector of arbitrary values.
Next, in the Evaluation Points section, set the Number of points for the X grid points to 100,000; 1,000,000; or more, depending on how many points your computer can handle. Set each of the Y grid point and Z grid point values to 1. This is a trick for getting a long vector of values that we can feed into the random functions in order to generate a lot of random points within the unit cube.
To make the plot appear as in the above figure, go to the Radius section and set Expression to cloud(rn1(x),rn2(x),rn3(x)) and the Radius scale factor to 0.3. In addition, in the Color section, set the Expression to cloud(rn1(x),rn2(x),rn3(x)) and the Color table to GrayScale.
One additional noteworthy fact about this plot: points with negative radius values will be ignored. This helps our visualization, since roughly half of the generated data is negative, so we can more easily see through the data and get an intuitive feel for the variations. This method only works for a rectangular block. To instead generate this type of plot over an arbitrary CAD geometry, you can use the Particle Tracing Module, which allows you to generate random points inside any type of CAD model.
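The point-cloud idea can be mimicked in plain Python. Here, a toy smooth function stands in for the cloud(x,y,z) interpolation function (the function, seed, and point count are arbitrary example choices); only points with positive values are kept, with radius proportional to the value, just as in the Scatter Volume plot:

```python
import math, random

random.seed(7)

def cloud(x, y, z):
    """Hypothetical stand-in for the imported interpolation function."""
    return 0.2 * math.sin(4 * x) * math.cos(3 * y) * math.sin(5 * z)

points = []
for _ in range(100_000):
    # Uniform random points in the unit cube, like rn1(x), rn2(x), rn3(x)
    x, y, z = random.random(), random.random(), random.random()
    v = cloud(x, y, z)
    if v > 0:                               # negative radii are dropped
        points.append((x, y, z, 0.3 * v))   # position plus scaled radius
```

Feeding such a list to any 3D scatter renderer gives the same “see inside the data” effect, since roughly half of the points are discarded.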
By the way, a similar-looking plot can be achieved in a 2D model by simply creating a 2D Surface plot using a double-sum expression, as shown in the figure above.