As engineers, researchers, and scientists, we are always striving to come up with improved designs. *Optimization* is the idea of altering model inputs, such as part dimensions and material properties, with the objective of improving some metric, while also considering a set of constraints. The Optimization Module in COMSOL Multiphysics is a useful tool for addressing such problems.

Dimensional optimization is one of the more common optimization techniques. The approach involves changing CAD dimensions directly to minimize mass, as illustrated in our Multistudy Optimization of a Bracket tutorial. In the bracket example, we use so-called *gradient-free techniques* to adjust dimensions and consider constraints on the relationships between the dimensions, the peak stress, and the lowest natural eigenfrequency. These techniques are very flexible in the type of objective functions and constraints that can be addressed. However, one drawback to these techniques is having to remesh the part repeatedly to numerically approximate the sensitivities of the objective function and constraints with respect to the design variables.

As we have previously discussed on the blog, it is also possible to analytically compute the design sensitivities due to geometric changes when using the *Deformed Geometry* interface. Further, the gradient-based solvers can use the sensitivities to optimize the dimensions of a part without remeshing — an element that we highlighted in the design of a capacitor. It is helpful to review the two blog posts referenced here to understand the functionality that we will use today.

Shape optimization is an extension of the previously developed concepts, and it considers not just straightforward dimensional changes, but general changes in shape as well. The shape of the structure is controlled via a set of design parameters that use a set of basis functions, which can describe quite arbitrary shapes. Let’s take a look at an example.

We begin with a classical shape optimization problem: adjusting the thickness of a cantilevered beam to minimize the mass, while maintaining a constraint on the peak deflection of the free end. The beam of initially uniform thickness has a nonuniform load distributed over its top surface, as shown in the diagram below.

*A cantilevered beam with a nonuniform load applied. Point A should not deflect more than a specified value. The mesh is also shown.*

First, we want to choose our design variables. Both the length of the beam and the thickness at the cantilevered end are fixed. What we can vary is the thickness of the beam along its length. Rather than working with the thickness itself, it is somewhat simpler to work with the change in thickness from the initial configuration, so we introduce a function, DT(X), which is initially zero along the length.

*The optimization problem studies a change in the thickness of the beam.*

Here, we choose to represent the change in thickness via a set of Bernstein polynomials of the fourth order:

DT(\bar X)=T_0\left[C_0 (1-\bar X)^4 + C_1 \bar X (1-\bar X)^3 + C_2 \bar X^2 (1-\bar X)^2 +C_3 \bar X^3 (1-\bar X)^1 +C_4 \bar X^4\right]

expressed in terms of the normalized dimension \bar X = X/L_0. The function is scaled by T_0 so that the polynomial coefficients have an order of magnitude near unity.
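As a quick sketch of how this parameterization behaves (plain Python, with the binomial factors of a true Bernstein basis absorbed into the coefficients C_k, as in the expression above):

```python
def dT(xbar, C, T0):
    """Change in thickness at normalized position xbar in [0, 1]."""
    return T0 * sum(c * xbar**k * (1 - xbar)**(4 - k) for k, c in enumerate(C))

# With all C_k = 0, the beam keeps its initial uniform thickness:
print(dT(0.5, [0, 0, 0, 0, 0], T0=0.1))   # 0.0

# Only the C_0 term survives at xbar = 0 and only the C_4 term at xbar = 1:
print(dT(0.0, [0.3, 0.1, 0.2, 0.4, 0.8], T0=0.1))  # T0 * C_0
print(dT(1.0, [0.3, 0.1, 0.2, 0.4, 0.8], T0=0.1))  # T0 * C_4
```

The endpoint behavior is why fixing the root thickness simply forces C_0 = 0 and why the tip-thickness bound becomes a bound on C_4 alone.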

Since the thickness of the beam at X=0 is specified, DT(0)=0, which fixes C_0=0, so that term can be omitted. At the far end, we add the constraint that the beam cannot become too thin: C_4<0.9.

In the intermediate region, we would also like to add constraints to further limit the design space. We could require 0 < DT(\bar X) < 0.9T_0. This constraint, however, has a drawback: it would allow the thickness of the beam to oscillate, and from first principles we know that this is not reasonable. There is no advantage to having the thickness of the beam *increase* along the length. We can instead add a constraint on the derivative, DT^\prime(\bar X) > 0, which forces the thickness of the beam to change monotonically along the length and has the added advantage of naturally satisfying 0 < DT(\bar X) < 0.9T_0.

There is one more constraint to consider: the displacement of the point at the end of the beam. We want the magnitude of the displacement of point A to be less than some specified value, u_{max}. With such information, we now have a complete optimization problem:

\begin{aligned}& \underset{C_1 \cdots C_4}{\text{minimize:}}& & Mass/M_0 \\& \text{subject to:}& & C_4 < 0.9 \\& & & DT^\prime(\bar X) > 0 \\& & & |\mathbf{u}_A|/u_{max} < 1\end{aligned}

Here, the normalization of the objective function with respect to the initial mass of the beam, M_0, is done to scale the objective function so that it is of order unity. Similarly, the magnitude of the displacement of the beam tip, |\mathbf{u}_A|, is normalized with respect to the peak permissible displacement, u_{max}. The normalized displacement should be less than one. Let’s now look at implementing such a problem in COMSOL Multiphysics using the Optimization Module.
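Before setting the problem up in COMSOL Multiphysics, the structure of the statement can be illustrated with a crude 1D analog: an Euler-Bernoulli cantilever whose tip deflection is evaluated with the unit-load method and whose coefficients are found by a naive random search. Everything here (E, b, u_max, the grid, and the search itself) is an assumption for illustration; the actual model is solved with a gradient-based optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
L0, T0, E, b = 1.0, 0.1, 1.0, 1.0      # length, max thickness, modulus, width
x = np.linspace(0.0, L0, 101)
xb = x / L0
q = xb**4 * (1 - xb)                   # the nonuniform load from the text

def trapz(y):
    """Trapezoid rule on the fixed grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def cumint(y):
    """Running integral of y from 0 up to each x."""
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) * np.diff(x) / 2)))

def DT(C):
    """Thickness change from the degree-4 basis (C[0] = 0 at the root)."""
    return T0 * sum(c * xb**k * (1 - xb)**(4 - k) for k, c in enumerate(C))

# Bending moment M(x) from the distributed load (independent of thickness):
# M(x) = int_x^L q(s) (s - x) ds.
Aq, Aqs = cumint(q), cumint(q * x)
M = (Aqs[-1] - Aqs) - x * (Aq[-1] - Aq)

def tip_deflection(T):
    """Unit-load method: delta = int M(x) (L0 - x) / (E I(x)) dx."""
    I = b * T**3 / 12
    return trapz(M * (L0 - x) / (E * I))

def mass(T):
    return b * trapz(T)

T_init = np.full_like(x, T0)
m0, d0 = mass(T_init), tip_deflection(T_init)
u_max = 2.0 * d0                       # assumed deflection budget

# One hand-picked monotone design plus random candidates within the bounds.
candidates = [[0.0, 0.0, 0.0, 0.0, 0.3]]
candidates += [[0.0, *rng.uniform(0.0, 0.9, 4)] for _ in range(800)]

best_m, best_C = m0, [0.0] * 5
for Cc in candidates:
    d = DT(Cc)
    if np.any(np.diff(d) < 0):         # pointwise constraint DT'(x) >= 0
        continue
    T = T0 - d
    m = mass(T)
    if tip_deflection(T) <= u_max and m < best_m:
        best_m, best_C = m, Cc

print(f"optimized mass ratio M/M0 = {best_m / m0:.3f}")
```

Even this blunt search finds a monotonically tapering design lighter than the uniform beam, which is the qualitative outcome the gradient-based solver reaches far more efficiently.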

We can begin with our initial design, simply a beam of fixed length and uniform thickness. The design is cantilevered at one end, with a nonuniform load across the top face that varies as \bar X^4(1-\bar X ). We want to first introduce the change in the thickness function. The polynomial function described earlier is the variable **DT**, as shown in the screenshot below. The expression **Xg** refers to the *x*-dimension of the original, undeformed geometry. The derivative of this function, with respect to the normalized *x*-direction, is the variable **dDTdX**. Two *Global Parameters*, **L0** and **T0**, define the length and maximum thickness.

*A screenshot showing the change in the thickness function and its derivative.*

The change in the thickness variable is used within the *Deformed Geometry* interface to define how the entire volume of the beam is altered with a change in thickness. Since it is only the thickness that changes, a simple linear mapping can be used, as illustrated below.

*The displacement within the beam is completely prescribed.*

We can now set up the optimization problem via the *Optimization* interface. The interface provides an easy way of setting up more complicated optimization problems with several constraints. The relevant settings are shown in the screenshots below, starting with the objective function. The *Integral Objective* feature integrates the material density over the modeling domain and normalizes with respect to the initial part mass.

*The optimization objective is to minimize the mass.*

The settings for the *Global Control Variables* feature are shown below. The four variables, **C1**, **C2**, **C3**, and **C4**, have an initial value of zero, which is equivalent to the initial beam shape. The constraint on **C4** is imposed as an upper bound and the scaling of all variables is unity.

*The definitions of the control variables, their bounds, and scaling.*

Next, we apply the *Pointwise Inequality Constraint* feature to the bottom boundary of the domain. This feature enforces that the derivative of the thickness-change function remains positive at every point, thereby ensuring a monotonic change in thickness.

*The constraint on the derivative along the length of the beam is enforced via a pointwise inequality constraint.*

Finally, the peak displacement of the point at the far end of the beam is constrained so that it is below a maximum specified value. This value is set via the *Point Sum Inequality Constraint* feature.

*The implementation of the constraint on the normalized peak displacement.*

Our optimization problem is now almost entirely set up. The only remaining step is to add an *Optimization* feature to the study sequence and to select the gradient-based SNOPT solver, which proves to be the fastest approach to our problem. All other settings can be left at their default values. The objective function and constraints are automatically taken from the *Optimization* interface.

*The relevant optimization solver settings.*

The results are depicted in the image below. The optimal shape within this basis has been identified. The displacement at the tip is at its maximum value, with the thickness monotonically changing along the length. Due to the expected deformation of the geometry, a mapped mesh was used.

*The optimized shape of the beam, which minimizes mass for the applied nonuniform load and constraints. The displacement field is plotted along with the applied load distribution and the mesh.*

We may ask ourselves how we know that the above structure is truly optimized. There is always the urge to perform a mesh refinement study, trying out finer and finer meshes to see how the solution converges. It is also reasonable to study convergence with respect to basis functions. We can use a higher-order Bernstein basis function and compare the results. This, however, can lead to a problem known as *Runge’s phenomenon*, along with slow convergence.

We can circumvent such issues by subdividing the original interval into multiple subintervals, using different lower-order shape functions within each interval (a piecewise polynomial). Other basis functions beyond the Bernstein basis can also be applied, such as the Chebyshev polynomials and the Fourier basis. The Optimizing the Shape of a Horn tutorial, available in our Application Gallery, features an example of the latter instance.
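To see why a single high-degree polynomial basis can misbehave, here is a small sketch of Runge's phenomenon: interpolating a smooth function on equispaced nodes oscillates badly near the interval ends, while Chebyshev nodes (or a piecewise basis) keep the error small. The function, degree, and node counts are illustrative choices, not from the tutorial.

```python
import numpy as np

# The classic Runge test function.
f = lambda t: 1.0 / (1.0 + 25.0 * t**2)
deg = 12

t_eq = np.linspace(-1, 1, deg + 1)                                    # equispaced nodes
t_ch = np.cos((2 * np.arange(deg + 1) + 1) / (2 * (deg + 1)) * np.pi)  # Chebyshev nodes

tt = np.linspace(-1, 1, 1001)
err_eq = np.max(np.abs(np.polyval(np.polyfit(t_eq, f(t_eq), deg), tt) - f(tt)))
err_ch = np.max(np.abs(np.polyval(np.polyfit(t_ch, f(t_ch), deg), tt) - f(tt)))
print(err_eq, err_ch)  # the equispaced error is far larger
```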

The cases discussed here include quite simple deformations. When considering more complicated deformations, you will need to put more effort into defining the deformation mapping. For very complicated deformations, it is also useful to add helper equations in order to compute the deformation.

If you have any questions about using these shape optimization techniques or are interested in adding the Optimization Module to your suite of modeling tools, please contact us.


Let’s begin by referring back to a previous blog post on the computation of design sensitivities that shows how we can use the *Deformed Geometry* interface and the Sensitivity study to analytically evaluate the design sensitivities of a parallel plate capacitor. For that problem, we computed the change in the capacitance with respect to geometric and material property variations. We also computed the design sensitivities to geometric changes without altering the geometry or performing any remeshing. We now want to use that same framework to change the geometry of our capacitor with the objective of minimizing a particular quantity by using the functionality of the Optimization Module.

We start with a simple extension to the previous example: a parallel plate capacitor of side length L=1\ m with two dielectrics, \epsilon_{r1}=2, \epsilon_{r2}=4, each of the same initial thickness, T_0=0.1\ m. Further, we will neglect all fringing fields. This lets us model only the region between two parallel plates so that our computational model looks just like the figure shown below.

*Schematic of a parallel plate capacitor with two dielectrics between the plates. Fringing fields are ignored.*

This model can be built by sketching two geometric blocks of dimensions as described above. The *Electrostatics* physics interface allows us to apply a voltage and a *Ground* condition at the top and bottom faces as well as apply material dielectric properties. It is possible to compute the capacitance by integrating the electric energy density, as described in the previously mentioned blog post, and get a value of C_{computed}=118\ pF.

Now, let’s suppose that we want to design a 100 pF capacitor by changing the thicknesses of the two layers, without altering the overall dimensions of the device. This can be posed as an optimization problem:

\begin{aligned}& \underset{dT}{\text{minimize:}}& & \left( \frac{C_{computed}}{100\ pF}-1\right)^2 \\& \text{subject to:}& & -T_0 < dT < T_0\end{aligned}

That is, we want to get the capacitance to be as close to 100 pF as possible by varying the change in the dielectric thicknesses within limits such that neither dielectric is of zero thickness. The design parameter dT is the change in the thicknesses of the two layers, as shown above. The objective function itself is formulated such that the absolute magnitude is on the order of unity. For numerical reasons, this form is preferred over (C_{computed}-100\ pF)^2 or the absolute value function: |C_{computed}-100\ pF|.
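Because the two dielectric layers act as capacitors in series, this particular problem also has a closed-form model that makes a useful cross-check on the optimization result. The sketch below assumes, per the text, side length L, no fringing fields, and that dT thickens the \epsilon_{r1}=2 layer; it is not how the COMSOL model computes the capacitance.

```python
eps0 = 8.854187817e-12  # vacuum permittivity, F/m
L, T0, er1, er2 = 1.0, 0.1, 2.0, 4.0

def C(dT):
    # Two dielectric slabs in series between the plates.
    return eps0 * L**2 / ((T0 + dT) / er1 + (T0 - dT) / er2)

print(C(0.0) * 1e12)               # ~118 pF: the initial design from the text

# C(dT) decreases monotonically in dT, so bisect on C(dT) = 100 pF.
target = 100e-12
lo, hi = -0.999 * T0, 0.999 * T0   # keep both layer thicknesses positive
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if C(mid) > target:
        lo = mid
    else:
        hi = mid
dT_opt = 0.5 * (lo + hi)
print(T0 + dT_opt, T0 - dT_opt)    # ~0.1542 m and ~0.0458 m
```

The bisection root reproduces both the initial 118 pF capacitance and the optimized layer thicknesses reported later in the post.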

We can begin by defining the variation of the thickness of the dielectric layers using the *Deformed Geometry* interface. The *Deformed Geometry* interface is necessary because we want to compute the analytic sensitivities without having to remesh the geometry as we change the dimensions. Since we will be changing the sizes of the two dielectrics, we want to define these deformations as completely as possible. We will do this with a *Prescribed Deformation* domain feature, as shown in the screenshot below.

The capacitor itself is originally sketched such that it is centered at the origin so the original, undeformed part has a coordinate system: **(Xg,Yg,Zg)**. For this simple Cartesian geometry, we can use this coordinate system to directly define the deformation as the thicknesses of the dielectric layers are changed. The deformations of the bottom and top layer are **dT*(1+Zg/T0)** and **dT*(1-Zg/T0)**, respectively, where **dT** and **T0** are *Global Parameters*.

*The change in the thicknesses of the dielectric layers is controlled with a* Prescribed Deformation *feature.*
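A quick check confirms that the two expressions produce a continuous deformation. This assumes, per the sketch described above, that the part is centered at the origin with the bottom layer spanning Zg in [-T0, 0] and the top layer Zg in [0, T0].

```python
T0, dT = 0.1, 0.03  # illustrative values for the parameters T0 and dT

bottom = lambda Zg: dT * (1 + Zg / T0)   # bottom layer, Zg in [-T0, 0]
top    = lambda Zg: dT * (1 - Zg / T0)   # top layer, Zg in [0, T0]

print(bottom(-T0), top(T0))   # 0.0 at both fixed outer plates
print(bottom(0.0), top(0.0))  # both equal dT at the dielectric interface
```

The deformation vanishes at the plates, and both expressions move the interface by exactly dT, so the mapped mesh stays continuous as the layer thicknesses change.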

Next, let’s look at the optimization user interface. For this simple problem, we can just add an *Optimization* feature to our study sequence, as shown in the screenshot below.

This minimization problem statement and scaling can be implemented entirely within the *Optimization* study node, as shown in the screenshot below. The relevant settings are the *Objective Function*, which is the expression **(C_computed/100[pF]-1)^2**, and the *Control Parameter*, **dT**, which has an initial value of **0[m]**. The upper and lower limits are specified to prevent zero, or negative, thicknesses. Lastly, we apply a scaling to the design parameter, **dT**, based upon the original thickness, T_0, such that the optimized value will have an order of magnitude near unity.

*The optimization solver settings.*

The SNOPT method is used to solve the optimization problem. Both the SNOPT and MMA methods use the analytically computed sensitivities, but SNOPT converges the fastest to within the default tolerance of 1E-6. The resultant device capacitance is 100 pF and the thicknesses of the dielectrics are D_{1} = 0.1542 m and D_{2} = 0.0458 m. The voltage fields in the original model and the optimized structure are shown below, along with the finite element mesh. Observe that the finite element mesh is stretched and compressed, but that no remeshing has occurred.

*The original and final structure. The voltage field and mesh are shown.*

We’ve looked at a fairly straightforward example of shape optimization, although with a little more effort, we could have found the solution to this problem by hand or by performing a parametric sweep. The geometric deformation demonstrated here is also quite simple. As you consider more complex geometries and more complex geometric changes, you will not always be able to directly use the undeformed spatial coordinates to define the deformation. In such cases, you will need to add equations to the model to help define the deformation. Of course, you may also want to consider more complex deformations, not just simple dimensional changes. We will cover this topic in an upcoming blog post on optimization.

In the meantime, if you have any questions about this technique and would like to use the Optimization Module for your design needs, please contact us.


Since 3D printing first emerged as a technology, its impact has been felt across a wide range of industries. Take automotive manufacturers, for instance. 3D printing capabilities have extended the freedom of vehicle design, enabling the development of highly customized parts while saving production time and costs. Within the medical field, the technology has led to greater innovation in the design of customized implants and devices as well as the creation of exact replicas of organs.

*A 3D printer creates a product component.*

Additive manufacturing, as we can see, is already making its mark as an efficient approach to product development. As the technology continues to advance, new freedom and flexibility is emerging, furthering its applications. One particular area of promise is the field of material design.

Material design typically tailors fine-scale structures organized in a repeating pattern to create a product with optimal performance. Depending on the requirements, the design of a single microstructure, otherwise known as a unit cell, can range from a simple monolithic geometry to a complex multimaterial geometry. In theory, the complexity, and thus the design freedom of the unit cell, is only limited by the creativity of the designer and the manufacturing capabilities.

Traditionally, 3D printing has only been capable of producing single-material products. Now, recent developments in 3D printing are showing potential for the multimaterial printing of small-scale structures. Such capabilities would provide designers with finer control over the microstructures, with the option to combine and customize the microstructures based on their specific needs. Further, engineers would be able to select the proportion and arrangement of the individual materials included in the structure.

Researchers at TNO are using multiscale modeling and multiphysics simulation to investigate virtual material design in 3D printing. In the next section, we’ll take a closer look at their innovative research.

To begin their simulation studies, the team first designed a single unit cell with twice the stiffness in one direction as the other and analyzed the material behavior for a given geometry. The optimization capabilities in COMSOL Multiphysics allowed them to derive the appropriate value of stress corresponding to an applied strain, so as to fit their desired stiffness matrix. The simulation results were verified with a printed sample tested for the expected material behavior.

*Left: Unit cell geometry. Center: Mechanical stress for the optimized design. Right: The 3D-printed samples.*

The researchers then performed a similar study on a highly anisotropic material. In the simulation, both the spatial distribution of the material and the orientation of the anisotropic fibers could be controlled.

Speaking to the research team’s greater goal, the simulation was extended to microstructures made up of a combination of different materials. The composition and arrangement of the various materials in the structure were adjusted until the optimal level of thermal conductivity was achieved.

*The multimaterial composition for the ideal anisotropic thermal conductivity. The color white indicates regions of high conductivity, orange represents regions of low conductivity, and red depicts nonconductive material and voids.*

After optimization at the microlevel, the team at TNO focused on optimization for objects of a larger scale. While a necessary step for developing actual products, extending the results of the microstructure simulations to real-life sizes can be quite computationally expensive. Multiscale modeling provides a solution, allowing designers to simulate at the micromaterial and product scales simultaneously. The team extracted parameters for the effective structural behavior of several multimaterial cells, which could then be applied as input in the full-scale model of a device.
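To give a flavor of what such effective-parameter extraction produces, the classical Voigt and Reuss mixing rules bound the effective stiffness of a two-material cell. This is a generic textbook sketch, not TNO's homogenization procedure; the moduli and volume fraction are made-up values.

```python
def voigt(E1, E2, phi):
    """Parallel (iso-strain) mixing rule: upper bound on effective stiffness."""
    return phi * E1 + (1 - phi) * E2

def reuss(E1, E2, phi):
    """Series (iso-stress) mixing rule: lower bound on effective stiffness."""
    return 1.0 / (phi / E1 + (1 - phi) / E2)

E1, E2, phi = 3.0e9, 0.5e9, 0.4   # Pa, Pa, volume fraction of material 1
print(reuss(E1, E2, phi), voigt(E1, E2, phi))  # Reuss <= E_eff <= Voigt
```

A full unit-cell simulation returns an effective stiffness somewhere between these bounds, with the exact value depending on the cell geometry, which is precisely the information the homogenization step feeds into the product-scale model.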

While topology optimization is a powerful tool for creating designs for 3D printing, there are also limitations to address for specific additive manufacturing techniques. In selective laser melting (SLM), for example, powdered material is melted into a desired shape via a laser beam. The powder that is not used must be removed from the printed object afterward, and large overhangs are not conducive to SLM designs as they have the tendency to warp.

To address such issues, the research team designed unit cells featuring different densities and then combined them to create the desired properties at the product level. Various techniques were combined in COMSOL Multiphysics, from stiffness homogenization for individual unit cell types to topology optimization at the product scale. The procedure as a whole was later applied to the development of a polymer hammer handle, shown below. The design was comprised of different unit cell types at the microlevel and optimized for proper stiffness and minimal material use.

*Left: Topology optimization simulation results. Center: Optimized hammer handle design. Right: Pattern containing multiple cell types. The densest cells, with small holes, are located near the top, and the least dense cells are located near the bottom. A few intermediate shapes can be found in between these areas.*

As the researchers at TNO demonstrated, the current limitations of additive manufacturing can be effectively addressed with multiscale modeling and multiphysics simulation. Such advancements help to extend the power and reach of 3D printing beyond conventional techniques. This enables the development of new and more complex products, further spreading the benefits of 3D printing technology.

- Read more about TNO’s simulation research on page 22 of *COMSOL News* 2015
- Browse additional blog posts on the topic of 3D printing

Healthcare-associated infections (HAIs) are a serious concern for patients admitted to hospitals worldwide, in developed and developing countries alike. According to the World Health Organization (WHO), *hundreds of millions of patients are affected globally each year*. The WHO further suggests that at any given time, seven out of a hundred patients in developed countries and ten out of a hundred patients in developing countries acquire at least one such infection during their hospital stay. This is in addition to the health issue that brought them into the hospital in the first place. Think about that for a minute.

*“UVa Medical Center” by cvilletomorrow — flickr. Licensed under CC BY 2.0 via Wikimedia Commons.*

So why do people get sicker when seeking treatment, and how can we stop it? There are various factors that promote HAIs. Perhaps the most obvious culprit is direct contact with an infected source (a person, equipment, furniture, etc.). However, as Alireza Kermani of Veryst Engineering pointed out at the COMSOL Conference 2015 in Boston, another culprit may be an inefficient ventilation system. To validate this theory, Veryst turned to CFD software.

Using the COMSOL Multiphysics simulation platform and the CFD Module, Veryst was able to analyze the airflow pattern in a hospital room and investigate how airborne bacteria is dispersed throughout the room.

The clean room model used for evaluation included people (patient and doctor); furniture (bed, wardrobe, and lamp); medical equipment; and a ventilation system (inlet and exhaust). The patient in question is coughing, thus spreading bacteria particles into the room. The model accounts for both forced and natural ventilation, with a ventilation rate of six air changes per hour (ACH), the ASHRAE Standard 170 ventilation rate for healthcare facilities.

*Layout of the hospital clean room. The room is thermally isolated on three sides and its base. The ceiling and the wall opposite the wardrobe exchange heat with the outside. Image taken with permission from Veryst’s paper titled “CFD Modeling for Ventilation System of a Hospital Room”.*
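As a quick sanity check on the stated ventilation rate, 6 ACH fixes the required inlet volume flow once the room volume is known. The room dimensions below are assumed for illustration; the paper does not state them here.

```python
ach = 6.0                            # air changes per hour (ASHRAE Standard 170)
room_volume = 4.0 * 5.0 * 3.0        # m^3, assumed 4 m x 5 m x 3 m room
flow = ach * room_volume / 3600.0    # m^3/s required at the ventilation inlet
print(flow)
```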

Veryst included the following heat sources and thermal conditions in their model:

- The bodies of the doctor and patient (60 W/m²)
- Equipment (100 W/m²)
- Lamp (200 W/m²)
- Ventilation inlet (20°C)
- Outside temperature (5°C)

They found that the average temperature of the clean room was 21°C. Here are their simulation results, depicting air flow and temperature distribution in the hospital room:

*Temperature distribution and velocity vectors in a hospital clean room. Image credit: Veryst.*

When designing a ventilation system that is good at preventing airborne infections, it’s also important to make sure that the design does not result in an uncomfortably hot or cold room. Of course, thermal comfort is highly subjective, but there is a way to quantify this qualitative metric. When evaluating the ventilation system design, Veryst relied on the ASHRAE Standard 55-2013 thermal sensation scale for quantifying thermal comfort:

| Sensation | Predicted Mean Vote (PMV) Value |
| --- | --- |
| Hot | +3 |
| Warm | +2 |
| Slightly warm | +1 |
| Neutral | 0 |
| Slightly cool | -1 |
| Cool | -2 |
| Cold | -3 |

*ASHRAE thermal sensation scale. An acceptable indoor value is between -0.5 and +0.5. Table adapted with permission from Veryst’s paper titled “CFD Modeling for Ventilation System of a Hospital Room”.*

Then, they calculated the PMV and predicted percentage of dissatisfied (PPD) patients for their simulation results and compared them with the scale. They found that with the current design and their assumptions about the patient’s metabolic rate and clothing, 56% of people would be dissatisfied because they would feel cool (PMV = -1.59).
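These two numbers are consistent with the standard ISO 7730 relation between PMV and PPD, which can be checked directly:

```python
import math

def ppd(pmv):
    """Predicted percentage of dissatisfied, per ISO 7730."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(round(ppd(-1.59)))  # 56% dissatisfied, matching the reported result
print(round(ppd(0.0)))    # 5%: even a neutral PMV leaves some people dissatisfied
```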

Finally, Veryst was able to quantify the percentage of bacteria that is leaving the hospital room through the exhaust of the ventilation system with COMSOL Multiphysics and the Particle Tracing Module.

*Plots showing the motion of bacteria particles at 30, 60, 180, and 230 seconds after the patient coughs. The particle color corresponds to velocity (m/s). Taken with permission from Veryst’s paper titled “CFD Modeling for Ventilation System of a Hospital Room”.*

As we can see in the plots above, after the patient coughs, the bacteria particles are spread throughout the room via the ventilation system. Most of the bacteria particles leave the room between 30 and 35 seconds after the cough. However, even after 300 seconds, 8% of the bacteria particles are still inside the room, which could cause HAIs.

*Animation courtesy of Veryst.*

To improve the ventilation system so that fewer bacteria remain in the room and the system is more energy efficient, while staying inside the acceptable thermal comfort range, you could run further simulations to optimize the design. Get the full details of Veryst’s research in their “CFD Modeling for Ventilation System of a Hospital Room” paper.

- New to COMSOL?
  - See what our CFD software looks like in under 3 minutes — watch this video
  - Try before you buy: Attend a free workshop and receive a 2-week trial of COMSOL Multiphysics
- Need some inspiration?
  - Download the .mph files for presolved CFD tutorials from the Application Gallery to get started

When it comes to counting particles, the easiest way to do so is directly within postprocessing, after the solution has been computed. Let’s walk through the steps behind this basic methodology.

First, create a duplicate of the default Particle Data Set, which is automatically generated after computing the solution. Add a Selection to the data set, as illustrated below, and then select the domains or boundaries in which you will count the particles.

Next, add a Global Evaluation node under Derived Values and point to the new Particle Data Set, in this case, *Particle 2*. You can select specific parameter values or output times in the settings window.

Choose from the following predefined expressions under the *Particle statistics* section in the Add/Replace Expression menu:

- `<phys>.alpha` — Transmission probability (the number of particles in the selection specified by the Particle Data Set divided by the total number of particles).
- `<phys>.Nsel` — Total number of particles in selection (the number of particles in the selection specified by the Particle Data Set).
- `<phys>.Nt` — Total number of particles (the total number of particles in the entire model).

*Evaluating the Global Evaluation node will display the value of the expression in a results table.*

- Molecular Flow Module > Industrial Applications > charge exchange cell
- AC/DC Module > Particle Tracing > quadrupole mass filter

If the number of particles or the number density of particles needs to be used in another physics interface, the best option is to use an *Accumulator*. Accumulators transfer information from particles to the mesh elements in which they reside. They are available on both domains and boundaries and can be accessed from the context menu of any *Particle Tracing* interface. Upon adding an accumulator to a domain, the following settings are shown:

The available options in the *Accumulator* feature are:

- *Accumulator type*: When set to *Count*, the accumulated variable is simply counted in each mesh element, unaffected by the element size. For *Density*, the accumulated variable is divided by the volume of the mesh element, allowing you to compute quantities like the number density of particles.
- *Accumulate over*: When set to *Elements*, the accumulated variable is simply the sum of the source terms for all of the particles that reside in the element at a given point in time. When set to *Elements and time*, the particles leave behind a contribution in the elements that they pass through, based upon how long they were in each element.
- *Source*: This is the expression defined on the particle that you want to project onto the underlying mesh. When counting particles, *Source* is simply set to “1”, but it can be any expression that exists on the particles, such as charge or kinetic energy. It can also depend on variables that are defined in the domain in which the particles are located.
- *Unit*: When the unit is selected for the accumulated variable, the required unit for the *Source* will change accordingly.
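A toy analog of the *Count* and *Density* accumulator types can be sketched outside COMSOL. This illustrates only the bookkeeping, not COMSOL's implementation; the grid size and particle count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
nx = ny = 4                                   # a 4 x 4 element grid on [0, 1]^2
pos = rng.uniform(0.0, 1.0, size=(10, 2))     # 10 particle positions

# "Count": each particle increments the variable in its containing element.
count, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                             bins=[nx, ny], range=[[0, 1], [0, 1]])

# "Density": divide by the element area (volume in 3D).
area = (1.0 / nx) * (1.0 / ny)
density = count / area

print(int(count.sum()))  # summing over all elements recovers the total
```

Summing the per-element counts is exactly what the integration component coupling described below does to recover the total number of particles.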

To count the total number of particles, you can add an integration component coupling to the domain where the accumulator exists. Boundary accumulators automatically add component couplings on the selected boundaries. In our example, the total number of particles is then given by `<integration_operator_name>(pt.count)`. This can be evaluated using the Global Evaluation node. The number of particles within each mesh element may also be coupled to other physics, since it is a degree of freedom. We can visualize how the particle counting works by plotting particle locations on a plot of the accumulated variable and the underlying mesh.

*A plot shows the particle locations (black dots) on top of the underlying mesh (gray lines). The color in each element represents the value of the accumulated variable.*

From the plot above, it is clear that the accumulator does indeed count the number of particles within each mesh element. For mesh elements that contain no particles, the accumulated variable is zero (shown in blue). Most mesh elements contain one particle (shown in green). However, one mesh element happens to contain two particles (shown in red).

You can also use accumulators to count the number of particles passing through an interior boundary. To do this, simply add a Wall condition on the boundary that the particles will pass through, setting the Wall condition to *Pass through*. Add an Accumulator subfeature to the Wall node with the following settings:

When a particle passes through the boundary, the accumulator increments the degree of freedom in the corresponding boundary mesh element. This gives the spatial distribution of the number of particles passing through the interior boundary, as depicted in the animation below.

It is possible to conveniently plot the total number of particles passing through the boundary as a function of time. Simply add a *1D Plot Group* and a *Global* plot feature. The accumulator creates predefined variables to add up the accumulated variables over all the mesh elements. To get the total number of particles, you can use the *Sum of accumulated variable count* option.

The plot below shows the results for the total number of particles that crossed the interior boundary.

Note: To learn more about the applications of accumulators, you can refer to this earlier blog post by my colleague Christopher Boucher.

- Molecular Flow Module > Benchmarks > s_bend_benchmark

A *Particle Counter* is a domain or boundary feature that provides information about particles arriving on a set of selected domains or surfaces from a specified release feature. Such quantities include transmission probability, current, and mass flow rate. The settings for a *Particle Counter* feature are very simple. Select a *Release feature* to connect to, or select *All* release features. You can add Particle Counter features to the model and access their variables without having to recompute the solution. Simply use *Study > Update Solution*, and the new variables will be automatically generated and immediately available for evaluation.

Each Particle Counter generates the following expressions. Note that their scope is different from that of the variables that are always available in the *Particle statistics* plot group, as outlined in the first section.

- `<phys>.<feature>.rL` — Logical expression for particle inclusion; can be used in the *Filter* node of the *Particle Trajectories* plot, allowing visualization of only the particles that connect a source and a destination.
- `<phys>.<feature>.Nsel` — Total number of particles in the selection; computes the total number of particles released by a specific release feature that are in the set of domains or boundaries determined by the Particle Counter selection.
- `<phys>.<feature>.Nfin` — Number of transmitted particles at the final time (the number of particles in the Particle Counter selection at the final solution time).
- `<phys>.<feature>.alpha` — Transmission probability (the ratio of the number of particles in the Particle Counter selection to the number of particles released by the release feature).
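The relationship between these counter variables is simple to illustrate. In the sketch below, the particle positions, the selection test, and all variable names are hypothetical; the point is only how the logical inclusion flag, the selection count, and the transmission probability relate to one another.

```python
import numpy as np

# Hypothetical final x-positions of eight particles from one release feature
x_final = np.array([0.1, 0.9, 0.8, 0.3, 0.95, 0.5, 0.85, 0.2])
n_released = x_final.size

# Treat the region x > 0.75 as the Particle Counter selection
rL = x_final > 0.75        # logical inclusion, analogous to <phys>.<feature>.rL
Nsel = int(rL.sum())       # particles in the selection, cf. <phys>.<feature>.Nsel
alpha = Nsel / n_released  # transmission probability, cf. <phys>.<feature>.alpha

print(Nsel, alpha)         # -> 4 0.5
```

Four of the eight particles end up in the selection, so the transmission probability is 0.5.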

When the Release feature is a *Particle Beam* feature — a specialized release feature for the *Charged Particle Tracing* interface — additional variables for the average beam position, velocity, and kinetic energy are generated for the particles that connect the counter to the particle beam.

- Particle Tracing Module > Charged Particle Tracing > sensitive high resolution ion microprobe
- Particle Tracing Module > Tutorials > brownian motion
- Particle Tracing Module > Fluid Flow > laminar mixer particle

There are three ways to count the number of particles on domains and boundaries. For simple models in which only a single release feature is present, the postprocessing tools might suffice. If you want to plot the number of particles on a domain or boundary, or if you want to use the number of particles in another physics interface, accumulators are the answer. To count particles that only connect a specific release feature to a selection of domains or boundaries, you can use the Particle Counter feature — one of the many new additions in COMSOL Multiphysics version 5.2.

Think about the first architects who designed a bridge above water. The design process likely included several trials and subsequent failures before they could safely allow people to cross the river. COMSOL Multiphysics and the Optimization Module would have helped make this process much simpler, if they had computers at the time, of course. Before we start to discuss building and optimizing bridges, let’s first identify the best design for a simple beam with the help of topology optimization.

In our structural steel beam example, both ends of the beam are on rollers, with an edge load acting on the top of the middle part. The beam’s dimensions are 6 m x 1 m x 0.5 m. In this case, we stay in the linear elastic domain and, due to the dimensions, we can use a 2D plane stress formulation. Note that there is a symmetry axis at x = 3 m.

*Geometry of the beam with loads and constraints.*

Using the beam geometry depicted above, we want to find the best compromise between the amount of material used and the stiffness of the beam. In order to do that, we need to translate this goal into the formal language of optimization. Every optimization problem consists of finding the best design vector \alpha such that the objective function F(\alpha) is minimal. Mathematically, this is written as \displaystyle \min_{\alpha} F(\alpha).

The design vector choice defines the type of optimization problem that is being solved:

- If \alpha is a set of parameters driving the geometry (e.g., length or height), we are talking about *parametric optimization*.
- If \alpha controls the exterior curves of the geometry, we are talking about *shape optimization*.
- If \alpha is a function determining whether a certain point of the geometry is void or solid, we are talking about *topology optimization*.

Topology optimization is applied when you have no idea of the best design structure. On the one hand, this method is more flexible than others because any shape can be obtained as a result. On the other hand, the result is not always directly feasible. As such, topology optimization is often used in the initial phase, providing guidelines for future design schemes.

In practice, we define an artificial density function \rho_{design}(X) , which is between 0 and 1 for each point X = \lbrace x,y \rbrace of the geometry. For a structural mechanics simulation, this function is used to build a penalized Young’s modulus:

E(X)= \rho_{design} (X) E_0

where E_0 is the true Young’s modulus. Thus, \rho_{design}= 0 corresponds to a void part and \rho_{design}= 1 corresponds to a solid part.

As mentioned before, in regards to the objective function, we want to maximize the stiffness of the beam. For structural mechanics problems, maximizing the stiffness is the same as minimizing the compliance. In terms of energy, it is also equivalent to minimizing the total strain energy defined as:

\displaystyle Ws_{total}=\int_\Omega Ws \ d\Omega=\int_\Omega \frac{1}{2} \sigma: \varepsilon\ d\Omega

Our topology optimization problem is thus written as:

\min_{\rho_{design}}\int_\Omega \frac{1}{2} \sigma (\rho_{design}): \varepsilon\ d\Omega

Now that our optimization problem has been defined, we can set it up in COMSOL Multiphysics. In this blog post, we will not detail the solid mechanics portion of our simulation. There are, however, several tutorials from our Structural Mechanics Module that help showcase this element.

When adding the *Optimization* physics interface, it is possible to define a *Control Variable Field* on a domain. As a first discretization for \rho_{design}, we can select a constant element order. This means that there is one value of \rho_{design} within each mesh element.

After this step is completed, a new Young’s modulus can be defined for the structural mechanics simulation, such as E(X)=\rho_{design} E_0.

As referenced above, the objective function is an integration over the domain. In the *Optimization* interface, we select *Integral Objective*. The elastic strain energy density is a predefined variable named *solid.Ws*. Thus, the objective can be easily defined as \int_\Omega Ws \ d\Omega.

Our discussion today will not focus on how optimization works in practice. In short, the optimization solver begins with an initial guess and iterates on the design vector until the objective function has reached its minimum.

If we run our optimization problem, we get the results shown below.

*Results from the initial test.*

The solution is trivial: To maximize the stiffness, the optimal design simply keeps the full amount of the original material!

After this initial test, we can conclude that a mass constraint is necessary if we want to make the optimization algorithm select a nontrivial design. With a constraint limiting the mass to 50 percent of the original, this can be written as:

\int_\Omega \rho \ d\Omega \leq 0.5M_0 \Leftrightarrow \int_\Omega \rho_{design} \ d\Omega \leq 0.5V_0

In COMSOL Multiphysics, a mass constraint can be included by adding an *Integral Inequality Constraint*. Additionally, the initial value for \rho_{design} needs to be set to 0.5 in order to respect this constraint at the initial state.

Let’s have a look at the results from this new problem, which are illustrated in the following animation.

*Results with the addition of a mass constraint.*

While this result is better, a problem remains: We have many areas with intermediate values for \rho_{design}. For the design, we only need to know if a given area is void or not. In order to get mostly 1 or 0 for the \rho_{design}, the intermediate values must be penalized. To do so, we can add an exponent p in the penalized Young’s modulus expression:

E(X)=(\rho_{design})^p E_0

In practice, p is taken between 3 and 5. For instance, if p = 5 and \rho_{design}= 0.5, the penalized Young’s modulus will be 0.03125 E_0. The contribution to the mass constraint, meanwhile, will still be 0.5. As such, the optimization algorithm will tend to drive the design variable toward 0 or 1.
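The arithmetic behind this penalization is easy to check. The sketch below assumes an illustrative Young's modulus for structural steel; the function name is our own, not a COMSOL expression.

```python
E0 = 200e9  # Pa; an assumed Young's modulus for structural steel
p = 5       # penalization exponent

def penalized_modulus(rho_design):
    # Intermediate densities receive disproportionately little stiffness,
    # while still contributing their full value to the mass constraint
    return rho_design**p * E0

print(penalized_modulus(0.5) / E0)  # -> 0.03125
```

An element with half density pays half the mass budget but delivers only about 3% of the stiffness, which is why intermediate densities become unattractive to the optimizer.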

With our new penalized Young’s modulus, we get the following result.

*Results with the new penalized Young’s modulus.*

A beam design has started to emerge! There is, however, a problematic checkerboard design, one that seems to be highly dependent upon the chosen mesh. In order to avoid the checkerboard design, we need to control the variations of \rho_{design} in space. One way to estimate variations of a variable field is to compute its derivative norm integrated on the whole domain:

\int_\Omega |\nabla \rho_{design}|^2 \ d\Omega

A new question arises: How can we minimize both the variation of \rho_{design} and the total strain energy?

Since a scalar objective function is necessary, these objectives must be combined. We can think about adding them, but first, the two expressions need to be scaled so that both take values around 1. Concerning the stiffness objective, we simply divide by Ws0, which is the value of the total strain energy when \rho_{design} is constant. For the regularization term, we can take the scaling factor \frac{h_0 h_{max}}{A}, where h_{max} is the maximum mesh size, h_0 is the expected size of details in the solution, and A is the area of the design space. Our final optimization problem is now written as:

\min_{\rho_{design}} \ \ {q\int_\Omega \frac{Ws}{Ws0} \ d\Omega + (1-q)\frac{h_0 h_{max}}{A}\int_\Omega |\nabla \rho_{design}|^2 \ d\Omega}

s.t. \quad \int_\Omega \rho_{design} \ d\Omega \leq 0.5V_0

where the factor q controls the regularization weight.
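The structure of this combined objective can be sketched as a small helper function. All argument names here are illustrative: `grad_rho_l2` stands in for the integral of |\nabla \rho_{design}|^2, and the numbers in the example are arbitrary.

```python
def combined_objective(Ws_total, Ws0, grad_rho_l2, q, h0, hmax, A):
    """Weighted sum of the scaled compliance and regularization terms.
    grad_rho_l2 plays the role of the integral of |grad(rho_design)|^2."""
    compliance = Ws_total / Ws0                      # ~1 for the initial design
    regularization = (h0 * hmax / A) * grad_rho_l2   # scaled smoothness penalty
    return q * compliance + (1.0 - q) * regularization

# e.g., q = 0.8 puts most of the weight on stiffness
print(combined_objective(2.0, 2.0, 10.0, 0.8, 0.1, 0.1, 1.0))
```

With both terms scaled to order 1, the single factor q is enough to trade stiffness against smoothness of the density field.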

Finally, the discretization of \rho_{design} needs to be changed to Lagrange linear elements to enable the computation of its derivative.

By solving this final problem, we obtain results that offer helpful insight as to the best design structure for the beam.

*Results with regularization.*

Such a design scheme can be seen at different scales in the real world, as illustrated in the bridge below.

*A Warren-type truss bridge. Image in the public domain, via Wikimedia Commons.*

Now that we have set up our topology optimization method, let’s move on to a slightly more complicated design space. We want to answer the question of how to design a bridge above water. To do so, a road zone in the geometry must be defined where the Young’s modulus is not penalized.

*Design space for a through-arch bridge.*

After a few iterations, we obtain an impressive result for the through-arch bridge. Such a result could provide architects with a solid understanding of the design that should be used for the bridge.

*Topology optimization results for a through-arch bridge.*

While the mathematical optimization algorithm had no guidelines on the particular design scheme, the result depicted above likely brings a real bridge design to mind. The Bayonne Bridge, shown below, is just one example among many others.

*The Bayonne Bridge. Image in the public domain, via Wikimedia Commons.*

It is important to note that this topology optimization method can be used in the exact same way for 3D cases. Applying the same bridge design question, the animation below shows a test in 3D for the design of a deck arch bridge.

*3D topology optimization for a deck arch bridge.*

Here, we have described the basics of using the topology optimization method for a structural mechanics analysis. To implement this method on your own, you can download the Topology Optimization of an MBB Beam tutorial from our Application Gallery.

While topology optimization may have initially been built for a mechanical design, the penalization method can also be applied to a large range of physics-based analyses in COMSOL Multiphysics. Our Minimizing the Flow Velocity in a Microchannel tutorial, for instance, provides an example of flow optimization.

- M.P. Bendsøe, “Optimal shape design as a material distribution problem”.
- M.P. Bendsøe and O. Sigmund, *Topology Optimization: Theory, Methods, and Applications*.

With gas prices and environmental concerns on the rise, a heightened demand for energy efficient vehicles has emerged in recent years. In response to this growing demand, the automotive industry has turned to alternate design approaches for automobiles, one of which is using lightweight materials.

Compared to their heavier counterparts, lightweight automobiles not only feature a simplified design structure, but they are also more energy efficient. In fact, according to the Office of Energy Efficiency & Renewable Energy, reducing vehicle weight by 10% can lead to a fuel economy improvement of up to 8%.

While the lightest weight possible is ideal, designers and engineers must ensure that reducing the weight of a vehicle’s component does not impact its structural integrity. These lighter components must still be able to perform their designed function efficiently and resist failure. Using the example of a mounting bracket, we will demonstrate how simulation can help develop an optimal design that meets such requirements.

The Multistudy Optimization of a Bracket tutorial, available in our Application Gallery, features a bracket made of steel that supports a mounted component. When compared to the bracket, the mounted component can be regarded as rigid.

*The bracket (shown in gray) supports a heavy component (shown in yellow).*

Because the bracket mounts the heavy component on a vibrating foundation, it is important that the natural frequency is maintained well above the excitation frequency. This will help avoid resonances. The shock loads that are applied to the bracket can be treated as a static acceleration load. The result? An optimization problem in which our objective is to reduce the weight of the bracket, subject to constraints on the natural frequency, the maximum stress due to shock loads, and the material dimensions. This means that the optimization problem must be solved over two different types of analyses: eigenfrequency and stationary.

To reduce the weight of the bracket, we drill holes into its vertical surface. At the same time, to offset the loss in stiffness, we alter the dimensions of the indentations. Six different geometrical parameters, summarized in the figure below, are included in the optimization.

*Geometrical parameters used in the bracket’s structural optimization.*

A series of constraints is applied to our simulation to ensure that the bracket design remains feasible. They are the following:

- The lowest natural frequency must be a minimum of 60 Hz
- The effective stress cannot exceed 80 MPa when the bracket is exposed to a peak acceleration of 4g in all three global directions at once
- There needs to be a minimum of 3 mm of material between two holes or a hole and an edge

The optimal geometrical parameters for our lightweight bracket are illustrated below. As you can see, three small holes have been introduced into the geometry. In its optimized state, the bracket weighs 178 g, which is 27 g less than its original weight.

*Initial optimization geometry.*

Let’s now shift focus to addressing our constraints, particularly the shock load applied to the bracket. Looking at the results in the plot below, we can see that the stress is less than 80 MPa on the entire geometry. As such, this bracket design successfully meets its stress constraint, balancing weight reduction with functionality.

*The stress from a shock load (peak acceleration) on the optimized bracket.*

While the optimal solution here includes three rather large holes and the widest indentation possible, note that other possible hole arrangements offer the same degree of weight reduction within a small tolerance. Thus, the design variables may differ at convergence.

- Download the tutorial model: Multistudy Optimization of a Bracket
- Check out the other blog posts within our Optimization category

After obtaining our measured data, the question then becomes this: How can we estimate the material parameters required for defining the hyperelastic material models based on the measured data? One of the ways to do this in COMSOL Multiphysics is to fit a parameterized analytic function to the measured data using the Optimization Module.

In the section below, we will define analytical expressions for stress-strain relationships for two common tests — the *uniaxial test* and the *equibiaxial test*. These analytical expressions will then be fitted to the measured data to obtain material parameters.

Characterizing the volumetric deformation of hyperelastic materials to estimate material parameters can be a rather intricate process. Oftentimes, perfect incompressibility is assumed in order to estimate the parameters. This means that after estimating material parameters from curve fitting, you would have to use a reasonable value for bulk modulus of the nearly incompressible hyperelastic material, as this property is not calculated.

Here, we will fit the measured data to several perfectly incompressible hyperelastic material models. We will start by reviewing some of the basic concepts of the nearly incompressible formulation and then characterize the stress measures for the case of perfect incompressibility.

For nearly incompressible hyperelasticity, the total strain energy density is presented as

W_s = W_{iso}+W_{vol}

where W_{iso} is the isochoric strain energy density and W_{vol} is the volumetric strain energy density. The second Piola-Kirchhoff stress tensor is then given by

S = -p_pJC^{-1}+2\frac{\partial W_{iso}}{\partial C}

where p_{p} is the volumetric stress, J is the volume ratio, and C is the right Cauchy-Green tensor.

You can expand the second term from the above equation so that the second Piola-Kirchhoff stress tensor can be equivalently expressed as

S = -p_pJC^{-1}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right)I-J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} C -\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)C^{-1}\right)

where \bar{I}_{1} and \bar{I}_{2} are invariants of the isochoric right Cauchy-Green tensor \bar{C} = J^{-2/3}C.

The first Piola-Kirchhoff stress tensor, P, and the Cauchy stress tensor, \sigma, can be expressed as a function of the second Piola-Kirchhoff stress tensor as

\begin{align*}
P &= FS\\
\sigma &= J^{-1}FSF^{T}
\end{align*}

Here, F is the deformation gradient.

Note: You can read more about the description of different stress measures in our previous blog entry “Why All These Stresses and Strains?”

The strain energy density and stresses are often expressed in terms of the stretch ratio \lambda. The *stretch ratio* is a measure of the magnitude of deformation. In a uniaxial tension experiment, the stretch ratio is defined as \lambda = L/L_0, where L is the deformed length of the specimen and L_0 is its original length. In a multiaxial stress state, you can calculate principal stretches \lambda_a\;(a = 1,2,3) in the principal referential directions \hat{\mathbf{N}_a}, which are the same as the directions of the principal stresses. The stress tensor components can be rewritten in the spectral form as

S = \sum_{a=1}^{3} S_{a} \hat{\mathbf{N}_{a}} \otimes \hat{\mathbf{N}_{a}}

where S_{a} represents the principal values of the second Piola-Kirchhoff stress tensor and \hat{\mathbf{N}_{a}} represents the principal referential directions. You can represent the right Cauchy-Green tensor in its spectral form as

C = \sum_{a=1}^{3}\lambda_a^2 \hat{\mathbf{N}_a}\otimes\hat{\mathbf{N}_a}

where \lambda_a indicates the values of the principal stretches. This allows you to express the principal values of the second Piola-Kirchhoff stress tensor as a function of the principal stretches

S_a = \frac{-p_p J}{\lambda_a^2}+2\left(J^{-2/3}\left(\frac{\partial W_{iso}}{\partial \bar{I_{1}}}+\bar{I_{1}} \frac{\partial W_{iso}}{\partial \bar{I_{2}}} \right) -J^{-4/3} \frac{\partial W_{iso}}{\partial \bar{I}_{2}} \lambda_a^2 -\frac{1}{\lambda_a^2}\left(\frac{\bar{I_{1}}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{1}} + \frac{2 \bar{I}_{2}}{3}\frac{\partial W_{iso}}{\partial \bar{I}_{2}}\right)\right)

Now, let’s consider the uniaxial and biaxial tension tests explained in the initial blog post in our Structural Materials series. For both of these tests, we can derive a general relationship between stress and stretch.

Under the assumption of incompressibility (J=1), the principal stretches for the uniaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda, \lambda_2 = \lambda_3 = \lambda^{-1/2}

The deformation gradient is given by

F = \left(\begin{array}{ccc} \lambda &0 &0 \\ 0 &\frac{1}{\sqrt{\lambda}} &0 \\ 0 &0 &\frac{1}{\sqrt{\lambda}}\end{array}\right)

For uniaxial extension, S_2 = S_3 = 0, and the volumetric stress p_{p} can be eliminated to give

S_{1} = 2\left(\frac{1}{\lambda} -\frac{1}{\lambda^4}\right) \left(\lambda \frac{\partial W_{iso}}{\partial \bar{I}_{1_{uni}}}+\frac{\partial W_{iso}}{\partial \bar{I}_{2_{uni}}}\right),\quad P_1 = \lambda S_1,\quad \sigma_1 = \lambda^2 S_1 \quad\quad (1)

The isochoric invariants \bar{I}_{1_{uni}} and \bar{I}_{2_{uni}} can be expressed in terms of the principal stretch \lambda as

\begin{align*}
\bar{I}_{1_{uni}} &= \lambda^2+\frac{2}{\lambda} \\
\bar{I}_{2_{uni}} &= 2\lambda + \frac{1}{\lambda^2}
\end{align*}
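These closed-form invariants are easy to double-check numerically from the deformation gradient. A quick sanity check with NumPy, using an arbitrary stretch value:

```python
import numpy as np

lam = 1.7  # an arbitrary uniaxial stretch

F = np.diag([lam, lam**-0.5, lam**-0.5])  # incompressible uniaxial extension
C = F.T @ F                               # right Cauchy-Green tensor

J = np.linalg.det(F)
I1 = np.trace(C)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))

assert np.isclose(J, 1.0)                   # det(F) = 1: incompressibility
assert np.isclose(I1, lam**2 + 2.0 / lam)   # matches the I1 expression above
assert np.isclose(I2, 2.0 * lam + lam**-2)  # matches the I2 expression above
```

Since J = 1, the isochoric invariants coincide with the ordinary invariants of C here.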

Under the assumption of incompressibility, the principal stretches for the equibiaxial deformation of an isotropic hyperelastic material are given by

\lambda_1 = \lambda_2 = \lambda, \; \lambda_3 = \lambda^{-2}

For equibiaxial extension, S_3 = 0, and the volumetric stress p_{p} can be eliminated to give

S_1 = S_2 = 2\left(1-\frac{1}{\lambda^6}\right)\left(\frac{\partial W_{iso}}{\partial \bar{I}_{1_{bi}}}+\lambda^2\frac{\partial W_{iso}}{\partial \bar{I}_{2_{bi}}}\right),\quad P_1 = \lambda S_1,\quad \sigma_1 = \lambda^2 S_1 \quad\quad (2)

The invariants \bar{I}_{1_{bi}} and \bar{I}_{2_{bi}} are then given by

\begin{align*}
\bar{I}_{1_{bi}} &= 2\lambda^2 + \frac{1}{\lambda^4} \\
\bar{I}_{2_{bi}} &= \lambda^4 + \frac{2}{\lambda^2}
\end{align*}

Let’s now look at the stress versus stretch relationships for a few of the most common hyperelastic material models. We will consider the first Piola-Kirchhoff stress for the purpose of curve fitting.

The total strain energy density for a Neo-Hookean material model is given by

W_s = \frac{1}{2}\mu\left(\bar{I}_1-3\right)+\frac{1}{2}\kappa\left(J_{el}-1\right)^2

where J_{el} is the elastic volume ratio and \mu is a material parameter that we need to compute via curve fitting. Under the assumption of perfect incompressibility and using equations (1) and (2), the first Piola-Kirchhoff stress expressions for the cases of uniaxial and equibiaxial deformation are given by

\begin{align*}
P_{1_{uniaxial}} &= \mu\left(\lambda-\lambda^{-2}\right)\\
P_{1_{biaxial}} &= \mu\left(\lambda-\lambda^{-5}\right)
\end{align*}
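These two Neo-Hookean expressions are simple enough to probe directly. In the sketch below, the shear modulus is an arbitrary illustrative value, not a fitted parameter; the check confirms that both stresses vanish in the undeformed state and that the equibiaxial response is at least as stiff as the uniaxial one in tension.

```python
import numpy as np

mu = 0.4e6  # Pa; an illustrative shear modulus, not a fitted value

def P1_uniaxial(lam):
    return mu * (lam - lam**-2.0)

def P1_biaxial(lam):
    return mu * (lam - lam**-5.0)

# Both stresses vanish at lambda = 1, and for a given stretch the
# equibiaxial stress is at least as large as the uniaxial one
print(P1_uniaxial(1.0), P1_biaxial(1.0))  # -> 0.0 0.0
for lam in np.linspace(1.0, 3.0, 9):
    assert P1_biaxial(lam) >= P1_uniaxial(lam)
```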

The stress versus stretch relationships for a few of the other hyperelastic material models are listed below. These can be easily derived through the use of equations (1) and (2), which relate stress and the strain energy density.

\begin{align*}
P_{1_{uniaxial}} &= 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10}+C_{01}\right)\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+\lambda^2 C_{01}\right)
\end{align*}

Here, C_{10} and C_{01} are Mooney-Rivlin material parameters.

\begin{align*}
\begin{split}
P_{1_{uniaxial}} &= 2\left(1-\lambda^{-3}\right)\left(\lambda C_{10} + 2C_{20}\lambda\left(I_{1_{uni}}-3\right)+C_{11}\lambda\left(I_{2_{uni}}-3\right)\right.\\
&\quad \left.+\,C_{01}+2C_{02}\left(I_{2_{uni}}-3\right)+C_{11}\left(I_{1_{uni}}-3\right)\right)\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\left(C_{10}+2C_{20}\left(I_{1_{bi}}-3\right)+C_{11}\left(I_{2_{bi}}-3\right)\right.\\
&\quad \left.+\,\lambda^2C_{01}+2\lambda^2C_{02}\left(I_{2_{bi}}-3\right)+\lambda^2 C_{11}\left(I_{1_{bi}}-3\right)\right)
\end{split}
\end{align*}

Here, C_{10}, C_{01}, C_{20}, C_{02}, and C_{11} are Mooney-Rivlin material parameters.

\begin{align*}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{uni}}^{p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\mu_0\sum_{p=1}^{5}\frac{p c_p}{N^{p-1}}I_{1_{bi}}^{p-1}
\end{align*}

Here, \mu_0 and N are Arruda-Boyce material parameters, and c_p are the first five terms of the Taylor expansion of the inverse Langevin function.

\begin{align*}
P_{1_{uniaxial}} &= 2\left(\lambda-\lambda^{-2}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{uni}}-3\right)^{p-1}\\
P_{1_{biaxial}} &= 2\left(\lambda-\lambda^{-5}\right)\sum_{p=1}^{3}p c_p \left(I_{1_{bi}}-3\right)^{p-1}
\end{align*}

Here, the values of c_p are Yeoh material parameters.

\begin{align*}
P_{1_{uniaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-\frac{\alpha_p}{2}-1}\right)\\
P_{1_{biaxial}} &= \sum_{p=1}^{N}\mu_p \left(\lambda^{\alpha_p-1} -\lambda^{-2\alpha_p-1}\right)
\end{align*}

Here, \mu_p and \alpha_p are Ogden material parameters.

Using the *Optimization* interface in COMSOL Multiphysics, we will fit measured stress versus stretch data against the analytical expressions detailed in our discussion above. Note that the measured data we are using here is the *nominal stress*, which can be defined as the force in the current configuration acting on the original area. It is important that the measured data is fit against the appropriate stress measure. Therefore, we will fit the measured data against the analytical expressions for the first Piola-Kirchhoff stress expressions. The plot below shows the measured nominal stress (raw data) for uniaxial and equibiaxial tests for vulcanized rubber.

*Measured stress-strain curves by Treloar.*

Let’s begin by setting up the model to fit the uniaxial Neo-Hookean stress to the uniaxial measured data. The first step is to add an *Optimization* interface to a 0D model. Here, *0D* implies that our analysis is not tied to a particular geometry.

Next, we can define the material parameters that need to be computed as well as the variable for the analytical stress versus stretch relationship. The screenshot below shows the parameters and variable defined for the case of a uniaxial Neo-Hookean material model.

Within the *Optimization* interface, a *Global Least-Squares Objective* branch is added, where we can specify the measured uniaxial stress versus stretch data as an input file. Next, a *Parameter Column* and a *Value Column* are added. Here, we define lambda (stretch) as a measured parameter and specify the uniaxial analytical stress expression to fit against the measured data. We can also specify a weighting factor in the *Column contribution weight* setting. For detailed instructions on setting up the *Global Least-Squares Objective* branch, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.

We can now solve the above problem and estimate material parameters by fitting our uniaxial tension test data against the uniaxial Neo-Hookean material model. This is, however, rarely a good idea. As explained in Part 1 of this blog series, the seemingly simple test can leave many loose ends. Later on in this blog post, we will explore the consequence of material calibration based on just one data set.

Depending on the operating conditions, you can obtain a better estimate of material parameters through a combination of measured uniaxial tension, compression, biaxial tension, torsion, and volumetric test data. This compiled data can then be fit against analytical stress expressions for each of the applicable cases.

Here, we will use the equibiaxial tension test data alongside the uniaxial tension test data. Just as we have set up the optimization model for the uniaxial test, we will define another global least-squares objective for the equibiaxial test as well as corresponding parameter and value columns. In the second global least-squares objective, we will specify the measured equibiaxial stress versus stretch data file as an input file. In the value column, we will specify the equibiaxial analytical stress expression to fit against the equibiaxial test data.

The settings of the Optimization study step are shown in the screenshot below. The model tree branches have been manually renamed to reflect the material model (Neo-Hookean) and the two tests (uniaxial and equibiaxial). The optimization algorithm is the Levenberg-Marquardt solver, which is used to solve problems of the least-squares type. The model is now set to optimize the sum of the two global least-squares objectives for the uniaxial and equibiaxial test cases.
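The same fitting strategy can be prototyped outside COMSOL with SciPy's Levenberg-Marquardt implementation. The sketch below is an illustrative assumption, not the tutorial's setup: it fits the Neo-Hookean parameter to synthetic data generated from a known modulus rather than to Treloar's measured files, and the weights mimic the column contribution weights.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "measured" data generated from a known modulus (mu_true); the
# actual model fits Treloar's measured data files instead
mu_true = 0.4e6
lam_uni = np.linspace(1.1, 3.0, 10)
lam_bi = np.linspace(1.1, 2.0, 10)
P_uni = mu_true * (lam_uni - lam_uni**-2)
P_bi = mu_true * (lam_bi - lam_bi**-5)

def residuals(params, w_uni=1.0, w_bi=1.0):
    """Stacked residuals of the two least-squares objectives; the weights
    play the role of the column contribution weights."""
    (mu,) = params
    r_uni = w_uni * (mu * (lam_uni - lam_uni**-2) - P_uni)
    r_bi = w_bi * (mu * (lam_bi - lam_bi**-5) - P_bi)
    return np.concatenate([r_uni, r_bi])

fit = least_squares(residuals, x0=[1.0e5], method="lm")  # Levenberg-Marquardt
print(fit.x[0] / mu_true)  # recovers mu_true (ratio close to 1)
```

Because the residual is linear in \mu here, the solver recovers the generating modulus essentially exactly; with real data and multiparameter models, the choice of weights changes the compromise between the two tests.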

The plot below depicts the fitted data against the measured data. Equal weights are assigned to both the uniaxial and equibiaxial least-squares fitting. It is clear that the Neo-Hookean material model with only one parameter is not a good fit here, as the test data is nonlinear and has one inflection point.

*Fitted material parameters using the Neo-Hookean model. Equal weights are assigned to both sets of test data.*

Fitting the curves while specifying unequal weights for the two tests will result in a slightly different fitted curve. Similar to the Neo-Hookean model, we will set up global least-squares objectives corresponding to Mooney-Rivlin, Arruda-Boyce, Yeoh, and Ogden material models. In our calculation below, we will include cases for both equal and unequal weights.

In the case of unequal weights, we will use a higher but arbitrary weight for the entire equibiaxial data set. You may, however, want to assign unequal weights only over a certain stretch range rather than the entire range. In that case, we can split the particular test case into parts, using a separate *Global Least-Squares Objective* branch for each stretch range. This allows us to assign different weights to different stretch ranges.

The plots below show fitted curves for different material models for equal and unequal weights that correspond to the two tests.

*Left: Fitted material parameters using the Mooney-Rivlin, Arruda-Boyce, and Yeoh models, with equal weights assigned to both sets of test data. Right: The same models fitted with a higher weight assigned to the equibiaxial test data.*

The Ogden material model with three terms fits both sets of test data quite well for the case of equal weights assigned to both tests.

*Fitted material parameters using the Ogden model with three terms.*

If we only fit uniaxial data and use the computed parameters for plotting equibiaxial stress against the actual equibiaxial test data, we obtain the results in the plots below. These plots show the mismatch in the computed equibiaxial stress when compared to the measured equibiaxial stress. In material parameter estimation, it is best to perform curve fitting for a combination of different significant deformation modes rather than considering only one deformation mode.

*Uniaxial and equibiaxial stress computed by fitting model parameters to only uniaxial measured data.*

To find material parameters for hyperelastic material models, fitting the analytic curves may seem like a solid approach. However, the stability of a given hyperelastic material model may also be a concern. The criterion for determining material stability is known as *Drucker stability*. According to Drucker's criterion, the incremental work associated with an incremental stress should always be greater than zero. If the criterion is violated, the material model will be unstable.

In this blog post, we have demonstrated how you can use the *Optimization* interface in COMSOL Multiphysics to fit a curve to multiple data sets. An alternative method for curve fitting that does not require the *Optimization* interface was also a topic of discussion in an earlier blog post. Just as we have used uniaxial and equibiaxial tension data here for the purpose of estimating material parameters, you can also fit the measured data to shear and volumetric tests to characterize other deformation states.

For detailed step-by-step instructions on how to use the *Optimization* interface for the purpose of curve fitting, take a look at the Mooney-Rivlin Curve Fit tutorial, available in our Application Gallery.

The dielectrophoretic effect will show up in both DC and AC fields. Let’s first look at the DC case.

Consider a dielectric particle immersed in a fluid. Furthermore, assume that there is an external static (DC) electric field applied to the fluid-particle system. The particle will in this case always be pulled from a region of weak electric field to a region of strong electric field, provided the permittivity of the particle is higher than that of the surrounding fluid. If the permittivity of the particle is lower than the surrounding fluid, then the opposite is true; the particle is drawn to a region of weak electric field. These effects are known as *positive dielectrophoresis* (pDEP) and *negative dielectrophoresis* (nDEP), respectively.

The pictures below illustrate these two cases with a few important quantities visualized:

- Electric field
- Maxwell stress tensor (surface force density)
- Surface charge density

*An illustration of positive dielectrophoresis (pDEP), where the particle permittivity is higher than that of the surrounding fluid \epsilon_p > \epsilon_f. At the surface of the particle, the induced surface charge is color-coded with red representing a positive charge and green a negative charge. Yellow represents a neutral charge.*

*An illustration of negative dielectrophoresis (nDEP), where the particle permittivity is lower than that of the surrounding fluid \epsilon_p < \epsilon_f.*

The Maxwell stress tensor represents the local force field on the surface of the particle. For this stress tensor to be representative of what forces are acting on the particle, the fluid needs to be “simple” in that it shouldn’t behave too weirdly either mechanically or electrically. Assuming the fluid is simple, we can see from the above illustrations that the net force on the particle appears to be in opposite directions between the two cases of pDEP and nDEP. Integrating the surface forces will indeed show that this is the case.

It turns out that if we shrink the particle and look at the infinitesimal case of a very small particle acting like a dipole in a fluid, then the net force is a function of the gradient of the square of the electric field.

Why is the net force behaving like this? To understand this, let’s look at what happens at a point on the surface of the particle. At such a point, the magnitude of the electric surface force density, f, is a function of charge times electric field:

(1)

f \propto \rho E

where \rho is the density of induced polarization charges. (Let’s ignore for the moment that some quantities are vectors and make a purely phenomenological argument by just looking at magnitudes and proportionality.)

The induced polarization charges are proportional to the electric field:

(2)

\rho \propto \epsilon E

Combining these two, we get:

(3)

f \propto \rho E = \epsilon E^2

But this is just the local surface force density at one point at the surface. In order to get a net force from all these surface force contributions at the various points on the surface, there needs to be a difference in force magnitude between one side of the particle and the other. This is why the net force, \bf{F}, is proportional to the gradient of the square of the electric field norm:

(4)

\mathbf{F} \propto \epsilon \nabla |\mathbf{E}|^2

In the above derivation, we have taken some shortcuts. For example, what is the permittivity in this relationship? Is it that of the particle or that of the fluid or maybe the difference of the two? What about the shape of the particle? Is there a shape factor?

Let’s now address some of these questions.

In a more stringent derivation, we instead use the vector-valued relationship for the force on an electric dipole:

(5)

\mathbf{F} = \mathbf{P} \cdot \nabla \mathbf{E}

where \bf{P} is the electric dipole moment of the particle.

To get the force for different particles, we simply insert various expressions for the electric dipole moment. In this expression, we can also see that if the electric field is uniform, we get no force (since the particle is small, its dipole moment is considered a constant). For a spherical dielectric particle with a (small) radius r_p in an electric field, the dipole moment is:

(6)

\mathbf{P} = 4 \pi r_p^3 k \mathbf{E}

where k is a parameter that depends on the permittivity of the particle and the surrounding fluid. The factor 4 \pi r_p^3 can be seen as a shape factor.

Combining these, we get:

(7)

\mathbf{F} = 4 \pi r_p^3 k \mathbf{E} \cdot \nabla\mathbf{E} = 2 \pi r_p^3 k \nabla |\mathbf{E}|^2

This again shows the dependency on the gradient of the square of the magnitude of the electric field.

If the electric field is time-varying (AC), the situation is a bit more complicated. Let’s also assume that there are losses that are represented by an electric conductivity, \sigma. The dielectrophoretic net force, \bf{F}, on a spherical particle turns out to be:

(8)

\mathbf{F} = 2 \pi r^3_p k \nabla |\mathbf{E}_{\textrm{rms}}|^2

where

(9)

k = \epsilon_0 \Re\{ \epsilon_f \} \Re \left\{ \frac{\epsilon_p -\epsilon_f}{\epsilon_p + 2 \epsilon_f} \right\}

and

(10)

\epsilon = \epsilon_{\textrm{real}} -j \frac{\sigma}{2 \pi \nu}

is the complex-valued permittivity. The subscripts p and f represent the particle and the fluid, respectively. The radius of the particle is r_p and \bf{E}_{\textrm{rms}} is the root-mean-square of the electric field. The frequency of the AC field is \nu.

From this expression, we can get the force for the electrostatic case by setting \sigma = 0. (We cannot take the limit when the frequency goes to zero, since the conductivity has no meaning in electrostatics.)

In the expression for the DEP force, we can see that the difference in permittivity between the fluid and the particle indeed plays an important role. If the sign of this difference switches, the force direction is flipped. The factor k involving the difference and sum of the permittivity values is known as the *complex Clausius-Mossotti function*, and you can read more about it here. This function encodes the frequency dependency of the DEP force.
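The frequency dependency can be made concrete with a short sketch of the real part of the Clausius-Mossotti function. Here, relative permittivities are used together with the common convention \epsilon^* = \epsilon_r - j\sigma/(\omega \epsilon_0), consistent with Eq. (10); the material values are illustrative assumptions, not parameters from any model in this post.

```python
import numpy as np

eps0 = 8.854e-12  # vacuum permittivity, F/m

def re_cm(nu, eps_p, sigma_p, eps_f, sigma_f):
    """Real part of the complex Clausius-Mossotti function at frequency nu.
    eps_* are relative permittivities; sigma_* are conductivities in S/m."""
    omega = 2.0 * np.pi * nu
    ep = eps_p - 1j * sigma_p / (omega * eps0)  # complex relative permittivity
    ef = eps_f - 1j * sigma_f / (omega * eps0)
    return ((ep - ef) / (ep + 2.0 * ef)).real

# Illustrative values (assumed): a weakly conducting particle in water.
nu = np.logspace(3, 9, 400)  # 1 kHz to 1 GHz
f_cm = re_cm(nu, eps_p=2.5, sigma_p=1e-2, eps_f=80.0, sigma_f=1e-3)

print(f_cm[0])   # conductivity-dominated at low frequency: positive (pDEP)
print(f_cm[-1])  # permittivity-dominated at high frequency: negative (nDEP)
```

For these assumed values, the sign of the factor flips somewhere in the sweep, which is precisely the pDEP-to-nDEP crossover that makes the force direction tunable by frequency.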

If the particles are not spherical but, say, ellipsoidal, then you use another proportionality factor. There are also well-known DEP force expressions for the case where the particle has one or more thin outer shells with different permittivity values, such as in the case of biological cells. The simulation app presented below includes the permittivity of the cell membrane, which is represented as a shell.

*The settings window for the effective DEP permittivity of a dielectric shell.*

There may be other forces acting on the particles, such as fluid drag force, gravitation, Brownian motion force, and electrostatic force. The simulation app shown below includes force contributions from drag, Brownian motion, and DEP. In the Particle Tracing Module, a range of possible particle forces are available as built-in options and we don’t need to be bothered with typing in lengthy force expressions. The figure below shows the available forces in the *Particle Tracing for Fluid Flow* interface.

*The different particle force options in the *Particle Tracing for Fluid Flow* interface.*

Medical analysis and diagnostics on smartphones are about to undergo rapid growth. We can imagine that, in the future, a smartphone can work in conjunction with a piece of hardware that samples and analyzes blood.

Let’s envision a case where this type of analysis can be divided into three steps:

- Extract blood using the hardware, which attaches directly to your smartphone, and compute mean platelet and red blood cell diameter.
- Compute the efficiency of separation of the red blood cells and platelets. This efficiency needs to be high in order to perform further diagnostics on the isolated red blood cells.
- Use the computed optimum separation conditions to isolate the red blood cells using the hardware attached to your smartphone.

The COMSOL Multiphysics simulation app focuses on Step 2 of the overall analysis process above. By exploiting the fact that blood platelets are the smallest cells in blood and have different permittivity and conductivity than red blood cells, it is possible to use DEP for size-based fractionation of blood; in other words, to separate red blood cells from platelets.

Red blood cells are the most common type of blood cell and the vertebrate organism’s principal means of delivering oxygen (O_{2}) to the body tissues via the blood flow through the circulatory system. Platelets, also called *thrombocytes*, are blood cells whose function is to stop bleeding.

Using the Application Builder, we created an app that demonstrates the continuous separation of platelets from red blood cells (RBCs) using the *Dielectrophoretic Force* feature available in the *Particle Tracing for Fluid Flow* interface. (The app also requires one of the following: the CFD Module, Microfluidics Module, or Subsurface Flow Module and either the MEMS Module or AC/DC Module.)

The app is based on a lab-on-a-chip (LOC) device described in detail in a paper by N. Piacentini et al., “Separation of platelets from other blood cells in continuous-flow by dielectrophoresis field-flow-fractionation”, from *Biomicrofluidics*, vol. 5, 034122, 2011.

The device consists of two inlets, two outlets, and a separation region. In the separation region, there is an arrangement of electrodes of alternating polarity that controls the particle trajectories. The electrodes create the nonuniform electric field needed for utilizing the dielectrophoretic effect. The figure below shows the geometry of the model.

*The geometry used in the particle separation simulation app.*

The inlet velocity for the lower inlet (853 μm/s) is significantly higher than that of the upper inlet (154 μm/s) in order to focus all the injected particles toward the upper outlet.

The app is built on a model that uses the following physics interfaces:

- *Creeping Flow* (Microfluidics Module) to model the fluid flow.
- *Electric Currents* (AC/DC or MEMS Module) to model the electric field in the microchannel.
- *Particle Tracing for Fluid Flow* (Particle Tracing Module) to compute the trajectories of RBCs and platelets under the influence of drag and dielectrophoretic forces and subjected to Brownian motion.

Three studies are used in the underlying model:

- Study 1 solves for the steady-state fluid dynamics and frequency domain (AC) electric potential with a frequency of 100 kHz.
- Study 2 uses a Time Dependent study step, which utilizes the solution from Study 1 and estimates the particle trajectories without the dielectrophoretic force. In this study, all particles (platelets and RBCs) are focused to the same outlet.
- Study 3 is a second Time Dependent study that includes the effect of the dielectrophoretic force.

You can download the model that the app was based on here.

To create the simulation app, we used the Application Builder, which is included in COMSOL Multiphysics® version 5.0 for the Windows® operating system.

The figure below shows the app as it looks when first started. In this case, we have connected to a COMSOL Server™ installation in order to run the app in a standard web browser.

*A biomedical simulation app running in a standard web browser.*

The app lets the user enter quantities, such as the frequency of the electric field and the applied voltage. The results include a scalar value for the fraction of red blood cells separated. In addition, three different visualizations are available in a tabbed window: the blood cell and platelet distribution, the electric potential, and the velocity field for the fluid flow.

The figures below show visualizations of the electric potential and the flow field.

*Screenshot showing the instantaneous electric potential in the microfluidic channel.*

*Screenshot displaying the magnitude of the fluid velocity.*

The app has three different solving options for computing just the flow field, computing just the separation using the existing flow field, or combining the two. A warning message is shown if there is not a clean separation.

Increasing the applied voltage will increase the magnitude of the DEP force. If the separation efficiency isn’t high enough, we can increase the voltage and click on the *Compute All* button, since in this case, both the fields and particle trajectories need to be recomputed. We can control the value of the Clausius-Mossotti function of the DEP force expression by changing the frequency. It turns out that at the specified frequency of 100 kHz, only red blood cells will exit the lower outlet.

In this case, the fluid permittivity is higher than that of the particles, so both the platelets and the red blood cells experience a negative DEP force, but with different magnitudes. To get a successful overall design, we need to balance the DEP forces against the forces from fluid drag and Brownian motion. The figure below shows a simulation with input parameters that result in 100% success in separating out the red blood cells through the lower outlet.

*Successful separation of red blood cells.*
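Why size-based fractionation works at all can be seen from a scaling argument: the DEP force of Eq. (8) grows as r^3, while Stokes drag grows only linearly with r, so their ratio scales as r^2 and larger cells deflect disproportionately more. The sketch below uses assumed, order-of-magnitude numbers (viscosity, k, field gradient, cell radii), not values from the app.

```python
import numpy as np

# Illustrative numbers (assumptions, not values from the app).
mu_f = 1.0e-3     # fluid dynamic viscosity, Pa*s
k = -1.0e-11      # nDEP prefactor k from Eq. (9), assumed magnitude
grad_E2 = 1.0e13  # |grad |E_rms|^2|, V^2/m^3, assumed
v_rel = 1.0e-4    # particle-fluid relative speed, m/s, assumed

def dep_to_drag(r):
    f_dep = abs(2.0 * np.pi * r**3 * k * grad_E2)  # Eq. (8): scales as r^3
    f_drag = 6.0 * np.pi * mu_f * r * v_rel        # Stokes drag: scales as r
    return f_dep / f_drag

r_platelet, r_rbc = 1.0e-6, 3.0e-6  # rough cell radii, m (assumed)
ratio = dep_to_drag(r_rbc) / dep_to_drag(r_platelet)
print(ratio)  # close to (r_rbc / r_platelet)**2 = 9
```

The r^2 scaling of the force ratio is what lets a single voltage and frequency setting push red blood cells off the streamlines that the smaller platelets continue to follow.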

To learn more about dielectrophoresis and its applications, click on one of the links listed below. Included in the list is a link to a video on the Application Builder, which also shows you how to deploy applications with COMSOL Server™.

- Model Gallery: Dielectrophoretic Particle Separation
- Video Gallery: How to Build and Run Simulation Apps with COMSOL Server™ (archived webinar)
- Wikipedia: Dielectrophoresis
- Wikipedia: Maxwell-Wagner-Sillars polarization
- Wikipedia: Clausius-Mossotti relation

*Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.*

Neutral particle beams of varying energies are an important element in many applications, including medicine, the design of scientific instruments, and materials processing. In the quest to accelerate neutral particles to extremely high velocities, we can turn our attention to the role of a charge exchange cell.

A *charge exchange cell* refers to an area of high-density gas that is placed in the path of an ion beam. In this region, fast ions from the beam can undergo charge exchange reactions with the background gas. This causes the ions to become neutralized, which in turn creates a neutral particle beam towards the end of the cell.

Let’s break down this process further, beginning with a charge exchange cell filled with neutral argon. As protons are accelerated through this medium, they are able to pick up electrons from the available argon atoms. This combination generates a neutral hydrogen atom, which travels rapidly out of the cell, and a slow-moving argon ion. The probability of capturing electrons, however, is relatively small. Thus, many charged particles can still remain in the beam as it leaves the cell.

So how do we achieve a completely neutral beam through this process? One approach is to use a pair of charged plates to deflect the protons prior to the beam’s arrival at its target. Using simulation, we can investigate the role of the gas cell and charged plates in the neutralization process.

*The charge exchange cell neutralization process.*

The Charge Exchange Cell model is used to carry out a beam neutralization process analysis. This model requires the Molecular Flow Module and the Particle Tracing Module.

The Charge Exchange Cell model features a cylindrical gas cell within a vacuum system. Neutral argon gas is provided by a shower head ring in the cell’s center. The shower head includes microchannels that control the cell’s neutral gas density, producing a high-pressure area within the instrument’s main vacuum system. The *Free Molecular Flow* interface is used to compute the number density and the pressure of argon gas within the gas cell.

*Surface plot of the pressure within the charge exchange cell.*

In this example, the electrically charged plates are represented as two blocks. The upper plate has an applied electric potential of 200 V, while the lower plate remains grounded. The *Electrostatics* interface is used to compute the electric potential between the plates, which can then be used to deflect the ions.

To model the collisions of the incoming ion beam with the neutral gas, the *Charged Particle Tracing* interface is used. This interface includes an Elastic Collision Force that takes the gas density computed by the *Free Molecular Flow* interface and uses it to determine the collision frequency.

The figure below shows the trajectories of the particles as they travel through the charge exchange cell. The dark gray lines indicate the trajectories of the ions, which have a charge number of 1. The light gray lines indicate the trajectories of the neutral particles, which feature a charge number of 0.

*Particle trajectories within the model. This image highlights how some ions undergo charge exchange reactions before exiting the cell.*

It is also possible to evaluate the total number of particles that hit a certain boundary. By comparing the number of particles that hit the grounded plate to the total number of particles in the model, we can estimate the neutralization efficiency of the gas cell. In this case, the neutralization efficiency is determined to be about 13.8%. Note that this value can vary slightly on different runs of the model because the charge exchange reactions between ions and neutrals occur randomly.
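The statistics behind that efficiency estimate can be sketched with a simple attenuation model: a proton crossing a gas column of number density n and length L with charge-exchange cross section \sigma_{cx} neutralizes with probability 1 - \exp(-n \sigma_{cx} L). The column parameters below are assumed for illustration, not taken from the model, and the Monte Carlo sample mimics the run-to-run scatter mentioned above.

```python
import numpy as np

# Column parameters assumed for illustration, not taken from the model.
n = 1.0e19          # gas number density in the cell, 1/m^3
sigma_cx = 1.0e-19  # charge-exchange cross section, m^2
L = 0.15            # path length through the high-density region, m

# A proton crossing the column neutralizes with probability
# 1 - exp(-n * sigma_cx * L) (thin-target attenuation).
p_neutral = 1.0 - np.exp(-n * sigma_cx * L)

# A Monte Carlo sample over a finite beam reproduces the run-to-run
# scatter of the computed efficiency.
rng = np.random.default_rng(1)
n_particles = 10_000
neutralized = rng.random(n_particles) < p_neutral

print(p_neutral)           # analytic probability, about 0.139
print(neutralized.mean())  # Monte Carlo estimate, varies with the seed
```

With a finite number of model particles, the estimated efficiency fluctuates around the analytic value from run to run, just as the full simulation's efficiency varies slightly between runs.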

In particle tracing and ray tracing simulations, we often need to use the particle or ray properties to change a variable that is defined on a set of domains or boundaries. For example, solid particles in a fluid might exert a significant force on the surrounding fluid, and they may also erode the surfaces they hit.

In previous blog posts, I’ve discussed two other cases in greater detail: divergence of an electron beam due to self-potential and thermal deformation of lenses in a high-powered laser system. Each of these phenomena can be modeled using Accumulators or the specialized features that are derived from them.

An Accumulator is a physics feature that communicates information from particles or rays to the underlying finite element mesh. For each Accumulator feature in a model, a corresponding dependent variable, called an *accumulated variable*, is declared. These accumulated variables can be defined either within a set of domains or on a set of boundaries, and they can represent any physical quantity, making them extremely flexible.

The Accumulator features can be added to any of the physics interfaces of the Particle Tracing Module. They can also be used in the *Geometrical Optics* interface, available with the Ray Optics Module, and the *Ray Acoustics* interface, available with the Acoustics Module.

Depending on the physics interface, more specialized versions of the Accumulator may be available for computing specific types of physical quantities. For example, the *Particle Tracing for Fluid Flow* interface includes a dedicated *Erosion* boundary condition that includes several built-in models for computing the rate of erosive wear on a surface.

The Accumulators can be divided into three broad categories, which function in the following ways:

- Accumulators on boundaries increment a variable defined on a boundary element whenever a particle hits it.
- Accumulators on domains project information from each particle to the mesh elements the particle passes through.
- Nonlocal accumulators communicate information from a particle’s current position to the location where it was originally released.

We will now investigate each of these varieties in greater detail.

When particles or rays strike a surface, they can affect that surface in a wide variety of ways. For example, a laser can cause a boundary to heat up, sediment particles can erode their surroundings, and sputtering can occur when high-velocity ions strike a wafer in a process chamber. All of these effects require the same basic modeling procedure; we define a variable on the boundary and change its value when particles or rays interact with the boundary.

To begin, let’s consider a simple case in which we want to count the number of times a boundary is hit. We first define a variable, called `rpd`, for example, which can have a distinct value in every boundary mesh element. Initially, this variable is set to zero in all elements. Every time a particle hits a mesh element on this boundary, we would like to increment the variable on that element by 1.

The values of the accumulated variable on the boundary elements (illustrated as triangles) after one collision are shown below:

To implement this in COMSOL Multiphysics, we first set up the particle tracing model, then add a “Wall” node to the boundary for which we want to count collisions. In this case, let’s specify that particles are reflected at this surface by selecting the Bounce wall condition. We then add the Accumulator node as a subnode to this Wall.

The settings shown in the following screenshot cause the accumulated variable (called `rpb`) to be incremented by 1 (the expression in the Source edit field) every time a particle hits the wall.

I have created an animation that demonstrates how the number of collisions with each boundary element is counted over the course of the study. Check it out:

By changing the expression in the Source edit field, it is possible to increment the accumulated variable using any combination of variables that exist on the particle and on the boundary. For example, the accumulated variable may increase by a different amount based on the velocity or mass of incoming particles. The dependent variable need not be dimensionless. In fact, it can represent any physical quantity.
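The counting mechanism above can be sketched in a few lines outside COMSOL. This is a minimal stand-in, not COMSOL code: particles in a unit box fly toward the bottom wall, and a per-element counter on that wall plays the role of the accumulated variable; all geometry and numbers are assumptions.

```python
import numpy as np

# Minimal sketch of a boundary accumulator (geometry and numbers assumed,
# not COMSOL code): particles in a unit box fly toward the bottom wall,
# and a per-element counter on that wall is incremented on every hit.
n_elements = 10             # boundary mesh elements on the wall
rpb = np.zeros(n_elements)  # accumulated variable, initially zero

rng = np.random.default_rng(2)
n_particles = 200
x = rng.random(n_particles)                # initial x-positions
y = rng.random(n_particles)                # initial heights above the wall
vx = rng.uniform(-1.0, 1.0, n_particles)
vy = -rng.uniform(0.1, 1.0, n_particles)   # all moving toward the wall (y = 0)

t_hit = y / -vy                            # time of flight to the wall
x_hit = (x + vx * t_hit) % 1.0             # impact point (wrapped for simplicity)
elem = np.minimum((x_hit * n_elements).astype(int), n_elements - 1)
np.add.at(rpb, elem, 1.0)                  # Source = 1: increment by 1 per hit

print(rpb.sum())  # -> 200.0: every particle hits exactly once
print(rpb)
```

Replacing the constant `1.0` in the `np.add.at` call with a per-particle quantity, such as incident momentum, mirrors changing the expression in the Source edit field.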

In addition to the generic Accumulator subnode — which can represent anything — dedicated accumulator-based features are available in the different physics interfaces, including the following:

- In the *Charged Particle Tracing* physics interface:
  - *Etch* (Use this to model physical sputtering of a surface by energetic ions.)
  - *Current Density*
  - *Heat Source*
  - *Surface Charge Density*
- In the *Particle Tracing for Fluid Flow* physics interface:
  - *Erosion* (For computing the total mass removed from the surface or the rate of erosive wear.)
  - *Mass Deposition*
  - *Boundary Load*
  - *Mass Flux*
- In the *Geometrical Optics* physics interface:
  - *Deposited Ray Power* (For computing a boundary heat source using the power of incident rays.)

We may also want to transfer information from particles to all of the mesh elements they pass through, not just the boundary elements they touch. We can do so by adding an Accumulator node to the physics interface directly, instead of adding it as a subnode to a Wall or other boundary condition.

For example, we can use an Accumulator to reconstruct the number density of particles within a domain. This technique is used in a benchmark model of free molecular flow through an s-bend in which the *Free Molecular Flow* interface is used to compute the number density of molecules in a rarefied gas.

Here is the geometry of the s-bend:

The settings window for the Accumulator is shown below.

The expression in the Source edit field is a bit more complicated than in the previous case. The source term R is defined as

(1)

R = \frac{J_{\textrm{in}} L}{N_{p}}

where J_{\textrm{in}} (SI unit: 1/(m^2 s)) is the molecular flux at the inlet, L (SI unit: m) is the length of the inlet, and N_{p} (dimensionless) is the number of model particles.

Physically, we can interpret R as the number of real molecules per unit time, per unit length in the out-of-plane direction, that are represented by each model particle. Because this source term acts on the time derivative of the accumulated variable, each particle leaves behind a “trail” in the mesh elements it passes through, which contributes to the number density in those elements.
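The "trail" mechanism can be sketched with a toy one-dimensional version of the same idea. Everything below is an assumed illustration rather than the benchmark's setup: one model particle crosses a row of square mesh cells, and at each time step it adds R\,dt, divided by the cell area, to the cell it currently occupies, building up a number density.

```python
import numpy as np

# Sketch of a density-type domain accumulator (setup and numbers assumed,
# not taken from the benchmark model).
J_in = 1.0e20  # molecular flux at the inlet, 1/(m^2 s), assumed
L_in = 1.0e-3  # inlet length, m, assumed
N_p = 100      # number of model particles, assumed
R = J_in * L_in / N_p  # Eq. (1): real molecules per model particle per second

n_cells = 20
cell_size = 1.0e-3             # m, square cells
n_density = np.zeros(n_cells)  # accumulated variable: number density

v = 100.0     # particle speed, m/s, assumed
dt = 1.0e-6   # time step, s
n_steps = 200  # exactly enough steps to traverse the whole row
for step in range(n_steps):
    x = step * v * dt                            # particle position
    cell = min(int(x / cell_size), n_cells - 1)  # cell it currently occupies
    n_density[cell] += R * dt / cell_size**2     # the "trail" left behind

print(n_density[:3])  # a roughly uniform trail through the traversed cells
```

With many particles, these per-particle trails superpose into a smooth density field, which is why the reconstruction improves as the particle count grows relative to the number of mesh elements.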

I have created a second animation in which the number density of molecules is computed using the Accumulator (bottom) and the result is compared to the result of the *Free Molecular Flow* interface (top). Here it is:

We do see some noise in the particle tracing solution because each particle can only make a uniform contribution to the mesh element it is currently in. However, when the number of particles is large compared to the number of mesh elements, it is still possible to obtain an accurate solution.

In addition to the generic Accumulator node, which can represent anything, dedicated accumulator-based features are available in the different physics interfaces, including the following:

- In the *Charged Particle Tracing* physics interface: *Particle-Field Interaction* computes the charge density of the particles, which can then be used as a source term to compute the self-potential of a beam of ions or electrons. It is also possible to compute the current density, which can create a significant magnetic field if the beam is relativistic.
- In the *Particle Tracing for Fluid Flow* physics interface: *Fluid-Particle Interaction* computes the body load exerted by particles on the surrounding fluid.
- In the *Geometrical Optics* physics interface: *Deposited Ray Power* generates a heat source term based on the amount of power absorbed by the medium when rays propagate through an absorbing medium.

The third variety of Accumulator is a bit more advanced than the previous two. A *Nonlocal Accumulator* is used to communicate information from a particle’s current position to the initial position from which it was released. The Nonlocal Accumulator can be added to an “Inlet” node, causing it to declare an accumulated variable on the mesh elements on the Inlet boundary.

The Nonlocal Accumulator can be used in some advanced models of surface-to-surface radiation. In many cases, the *Surface-to-Surface Radiation* physics interface (available with the Heat Transfer Module) can be used to efficiently and accurately model radiative heat transfer. However, the *Surface-to-Surface Radiation* interface relies on the assumption that all surfaces reflect radiation diffusely. That is, the direction of reflected radiation is completely independent of the direction of incident radiation. It cannot be used, for example, if some of the radiation undergoes specular reflection at smooth, polished, metallic surfaces.

One approach to modeling radiative heat transfer with a combination of specular and diffuse radiation is to use the *Mathematical Particle Tracing* interface, as demonstrated in the example of mixed diffuse and specular reflection between two parallel plates.

The incident heat flux on each plate is computed by releasing particles from the plate surface, querying the temperature of each surface the particles hit, and communicating this information back to the point at which the particles are initially released. The image below shows the temperature distribution between the two plates, where the top plate is heated by an external Gaussian source.

We have seen that Accumulators can be used to model interactions between particles or rays and any field that is defined on the surrounding domains or boundaries. The accumulated variables can represent any physical quantity. The Accumulator is the basic building block that allows for sophisticated one-way or two-way coupling between a particle- or ray-based physics interface and any of the other products in the COMSOL product suite.

The Accumulators and related physics features have too many settings and applications to discuss in detail in a single blog post. To learn more about the many options available, please refer to the User’s Guide for the Particle Tracing Module (for particle tracing physics interfaces), the Ray Optics Module (for the *Geometrical Optics* interface), or the Acoustics Module (for the *Ray Acoustics* interface).

If you are interested in learning more about any of these products, please contact us.
