Tuning forks are made out of many different materials, but most of them are calibrated to a standard pitch of 440 Hz for an A note. An article in *JOM* discusses how the frequency of a tuning fork changes with different materials and a fixed geometry (Ref. 1). This made me think: What if we change the geometry and material of a tuning fork to reach the desired frequency?

*A tuning fork.*

The Application Library contains several models featuring a tuning fork geometry, as well as a tuning fork simulation app. You can access the Application Library within the COMSOL® software GUI via the *File* menu and search for the keyword “tuning fork”. For this blog post, we’ll start with the simple Tuning Fork model (and accompanying example app).

This model features a parametric geometry, material properties of steel, the *Solid Mechanics* interface, and two studies. Both studies perform an eigenfrequency analysis to search for the eigenfrequency around 440 Hz. The first study uses a parametric sweep of the tuning fork’s arm length, set as a parameter *L*, to find the optimal design for 440 Hz. In contrast, the second study applies a mathematical optimization algorithm that uses *L* as the control variable and the deviation from the target frequency as the optimization objective for fast, precise, and efficient optimization.

*Tuning Fork model showing the original settings to search for a tune of 440 Hz via two different strategies.*

Let’s return to the initial question: How does the tuning fork’s arm length depend on the material for a tune of 440 Hz?
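Before changing anything in the model, we can estimate the answer with Euler-Bernoulli beam theory by treating each prong as a clamped cantilever. The sketch below is a back-of-the-envelope calculation in Python; the prong radius and the handbook material data are assumptions, not values from the COMSOL model:

```python
import math

BETA1 = 1.8751  # first eigenvalue of a clamped-free (cantilever) beam

def prong_length(E, rho, f=440.0, r=2.5e-3):
    """Prong length [m] that tunes a cylindrical cantilever to frequency f [Hz].

    Euler-Bernoulli beam theory: f = (BETA1^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)),
    solved for L. E in Pa, rho in kg/m^3; r is an assumed prong radius in m.
    """
    I = math.pi * r**4 / 4.0  # second moment of area, circular cross section
    A = math.pi * r**2        # cross-sectional area
    return math.sqrt(BETA1**2 / (2.0 * math.pi * f) * math.sqrt(E * I / (rho * A)))

# Generic handbook values, not the COMSOL Material Library data
for name, (E, rho) in {"steel": (200e9, 7850.0), "aluminum": (70e9, 2700.0)}.items():
    print(f"{name}: L ~ {1e3 * prong_length(E, rho):.1f} mm")
```

Because L scales as (E/rho)^(1/4), materials with similar stiffness-to-density ratios, such as steel and aluminum, end up with nearly identical prong lengths.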

First, we extend the model with a material switch. This option allows us to set and test various materials for the model. Further, this switch is needed to work with the material sweep in the *Study* node, as described later in this blog post. We can add available materials from the built-in Material Library, including:

- Aluminum
- Titanium beta-21S
- Copper
- Steel AISI 4340

The switch is assigned to the solid domains of the tuning fork.

*Setting the material switch for a multimaterial analysis.*

Now that we’ve transformed the original model into a multimaterial model, we can adjust the studies. Combined with a material sweep, the studies can solve the physical model for all chosen materials, making it possible to analyze all of the results together. For example, we can look at the eigenfrequency and confirm that it changes with different materials and arm lengths.

Various sweeps can be combined easily within a study, such as the extension of Study 1 with a material sweep. In contrast, when we try to add a material sweep to the optimization study for Study 2, we get an error message. The good news is that there is another way of achieving this by using *Study References*, as explained below.

Setting the material sweep directly in the study is not supported, since it is only possible to use one *Sensitivity*, *Optimization*, *Parameter Estimation*, or *Parametric/Material/Function Sweep* study step in each study. These study nodes tend to control the same solver settings and are therefore incompatible with each other. To perform a parametric or nested optimization, we can call a study containing an *Optimization* node from inside another study via a *Study Reference* node.

Therefore, we add an additional empty study and fill it with a *Material Sweep* node and a *Study Reference* node pointing to the optimization study. In the *Optimization* node, we can define the settings needed for the optimization, as long as all entries are globally available. Study 2, which contains the optimization, is left as is.

*Use of an additional study to create nested studies.*

With all the discussed adjustments, we can now run the multimaterial optimization study by computing Study 3. This study controls the material assignment and runs an individual optimization for each material by starting Study 2 automatically. Hence, we can extract and postprocess the individual design changes for the different materials. This can be done, for example, by a global evaluation of the parametric data set. Evaluating the tuning fork arm length, set as the control variable *L*, gives us the design changes needed to tune each tuning fork design to 440 Hz.

*Top: Settings of the global evaluation. Bottom: Results table for the different materials identified via the switch index.*
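The nested logic of this setup, an outer material sweep driving an inner single-variable optimization, can be mimicked in a few lines of Python. This is purely a conceptual sketch using textbook cantilever beam theory and generic material data, not the COMSOL implementation:

```python
import math

BETA1 = 1.8751  # first clamped-free beam eigenvalue

def eigenfrequency(L, E, rho, r=2.5e-3):
    """First bending eigenfrequency [Hz] of a cylindrical cantilever prong
    of length L [m]; r is an assumed prong radius."""
    I, A = math.pi * r**4 / 4.0, math.pi * r**2
    return BETA1**2 / (2.0 * math.pi * L**2) * math.sqrt(E * I / (rho * A))

def optimize_length(E, rho, target=440.0, lo=0.01, hi=0.5, tol=1e-10):
    """Inner 'optimization': bisection on L, exploiting that the
    eigenfrequency decreases monotonically with prong length."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eigenfrequency(mid, E, rho) > target:
            lo = mid  # frequency too high -> lengthen the prong
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Outer "material sweep", analogous to Study 3 driving Study 2
materials = {"steel": (200e9, 7850.0), "aluminum": (70e9, 2700.0),
             "copper": (110e9, 8940.0)}
for name, (E, rho) in materials.items():
    L = optimize_length(E, rho)
    print(f"{name}: L = {1e3 * L:.2f} mm -> {eigenfrequency(L, E, rho):.1f} Hz")
```

Here, bisection stands in for the gradient-based optimization solver; the point is the structure: the outer loop plays the role of Study 3 and the inner routine the role of Study 2.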

You can build an app from a tuning fork model that includes a customized user interface and restricted inputs and outputs. Take the Tuning Fork app in the Application Library as an example. This app can be used to quickly compute the frequency of a tuning fork with the prong length as an input or the optimal prong length with the frequency as an input.

*The user interface of an example tuning fork app.*

The example featured above can be used as inspiration for building apps of your own via the Application Builder tool in COMSOL Multiphysics.

In this blog post, we used material sweeps in combination with an optimization study to find the best geometry for tuning forks made of different materials. Note that the approach discussed here is generic. You might want to combine other study steps that have the same hierarchy.

With this approach, you can combine all of the *Sensitivity*, *Optimization*, *Parameter Estimation*, or *Parametric/Material/Function Sweep* features into nested studies, thus improving your simulations, results, and products.

Try it yourself: Access the Tuning Fork model and app by clicking the button below, which will take you to the Application Gallery. Then, you can download the MPH-files if you have a COMSOL Access account and valid software license.

- T.D. Burleigh and P.A. Fuierer, "Tuning Forks for Vibrant Teaching," *JOM*, pp. 26–27, Nov. 2005.

Current loudspeakers feature advanced connectivity and an improved frequency range. This enables them to interact with virtual assistants; stream music wirelessly; and connect to additional parts, such as subwoofers. These advancements create new design demands. For instance, some loudspeakers are waterproof so that they can be used while in the shower or by the pool. Perhaps a more serious concern is that loudspeaker designs should be durable so that consumers can start using the product right out of the box.

*Left: A loudspeaker with the grille removed, circa 1980s. Image by PT35 — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons. Right: Example of a modern, portable speaker with* Bluetooth® *wireless technology. Image by TAKKA@P.P.R.S. — Own work. Licensed under CC BY-SA 2.0, via Flickr Creative Commons.*

Portability is one design factor that has become almost a requirement as consumer preferences change. Engineers need to design smaller, lighter loudspeakers that maintain sound quality and performance standards. To maximize loudspeaker performance while minimizing the overall weight, you can perform a topology optimization of loudspeaker driver components using the COMSOL Multiphysics® software and add-on Optimization Module.

In a loudspeaker driver, a magnetic circuit concentrates the magnetic flux into an air gap. A coil, wound perpendicular to the magnetic flux lines, is placed in the air gap and mechanically connected to the membrane of the speaker. When a current runs through the coil, electromagnetic forces induce movement. The membrane picks up this movement and interacts with the air, generating sound waves.

The magnetic circuit is formed by an iron yoke, which performs two important functions:

- Maximizing the flux concentrated at the coil
- Providing a uniform field across the coil

In this example, the geometry for the circuit is similar to the one in the Loudspeaker Driver model. It is convenient to formulate the iron’s constitutive relation as a nonlinear relative magnetic permeability derived from its original B-H curve. This paves the way for the topology optimization, where the permeability will easily combine with the control variable field to drive the optimization process.

*To determine the optimal shape of an iron yoke for a magnetic circuit, you can use topology optimization.*

A typical figure of merit for the performance of the magnetic circuit is the parameter *BL*, or force factor, which is the product of the magnetic flux density in the air gap and the length of the coil wire. The larger the *BL* parameter, the higher the performance of the magnetic circuit. For a multiobjective study, you can scan the optimal shape of the component as a function of *BL* together with the weight reduction of the yoke.
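As a rough illustration of the force factor, if every turn of the voice coil sits in a uniform gap field, *BL* reduces to the flux density times the total length of wire in the gap. The field strength, turn count, and coil radius below are invented for the example:

```python
import math

def force_factor(B_gap, n_turns, coil_radius):
    """Force factor BL [T*m]: gap flux density [T] times the total length
    of wire in the gap, assuming every turn sees the same uniform field."""
    wire_length = n_turns * 2.0 * math.pi * coil_radius  # [m]
    return B_gap * wire_length

# Hypothetical driver: 1 T gap field, 80 turns, 12.5 mm coil radius
bl = force_factor(1.0, 80, 12.5e-3)
print(f"BL = {bl:.2f} T*m")  # the Lorentz force on the coil is F = BL * i
```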

After the topology optimization solution is found, you can extract the optimized geometry and reconstruct it for further analysis.

The original geometry includes an iron domain with a volume of 37 cm³; specifically, the lower arm of the iron yoke. In the first two studies, the magnetic fields for the initial (suboptimal) circuit configuration are analyzed and the nonlinear relative permeability approach is validated by comparing magnetic fields with the standard B-H formulation.

*Left: The original geometry (red: iron, blue: air). Right: Magnetic flux density norm and lines for the initial configuration.*

By adding an *Optimization* interface, you can further reduce the volume of the iron yoke while preserving a high magnetic performance.

The third study begins with a two-step topology optimization, always seeking the highest possible *BL* and starting from a “full” cylindrical annulus with an iron volume of about 52 cm³. The former condition is implemented as an integral objective on the *Coil* domain; the latter as an integral inequality constraint, where it is required that the volume of the yoke remains as close as possible to the target volume.

A reduction to 37 cm³ is targeted in Study 3 and the result, very similar to the original geometry, confirms that the original geometry was indeed already nearly optimal. Study 4 seeks the best performance with an even lighter geometry, expressed as a filling factor of 50% of the full volume; i.e., 26 cm³.

*Left: The optimized geometry, where the volume of the lower arm of the iron yoke is 26 cm³. Right: Magnetic flux density norm and lines for the optimized configuration.*

The results for the two studies are comparable. Although the geometry with the optimized topology is lighter, there is no drawback in performance.

*A 3D revolution plot of the magnetic flux density norm, showing the final optimized geometry.*

These studies show that topology optimization can be used to find the best possible shape and constraint parameters for a loudspeaker component.

In preparation for further analyses of the optimal configuration, the final shape from the optimization analysis can be exported as an independent geometry.

*Left: A contour plot that defines the optimized iron/air threshold. Right: The optimized shape imported as a geometry object.*

Click the button below to try the Topology Optimization of a Magnetic Circuit tutorial yourself. Doing so will take you to the Application Gallery. From there, you can download the MPH-file if you have a COMSOL Access account and a valid software license.

Read more about topology optimization:

- How to Use Acoustic Topology Optimization in Your Simulation Studies
- Acoustic Topology Optimization with Thermoviscous Losses
- How to Use Topology Optimization Results as Model Geometries

*The* Bluetooth *word mark is a registered trademark owned by the Bluetooth SIG, Inc. and any use of such marks by COMSOL is under license.*

Additive manufacturing is the process of creating a 3D object by adding one or more materials on top of each other layer by layer. To learn more about this type of manufacturing, we reached out to Professor Frédéric Roger of the Mines Telecom Institute (IMT), Lille-Douai Center. (IMT is a French public institution dedicated to higher education, research, and innovation in engineering and digital technologies.)

Professor Roger says that, in a sense, additive manufacturing is a bit like sewing or weaving. In both processes, a heterogeneous finished product is created by controlling how different raw materials are consolidated. In weaving, the materials are usually thread and yarn; however, additive manufacturing can use many materials, including polymers, metal alloys, ceramics, and composites.

*Choosing the right materials is important for creating an ideal finished product, be it a warm blanket (left, woven by my grandmother) or a customized aerospace part (right). Right image in the public domain in the United States, via Wikimedia Commons.*

This wide range of materials means that additive manufacturing can be used to design a large number of unique objects across many industries. For instance, Roger mentions that by using the right materials and thermodynamic conditions, engineers can make objects that withstand or adapt to severe environmental conditions. Such objects could even adapt to certain temperatures or chemical conditions by changing their shape or releasing chemical species (like drugs) that are trapped in a matrix. A transformation over time would add another dimension to the printed part, resulting in “4D printing”.

*Sometimes, additive manufacturing parts are inspired by natural forms, like the bio-inspired example pictured here. Image courtesy Frédéric Roger.*

According to Roger, the many opportunities that come with additive manufacturing make it “an unavoidable manufacturing process,” as it “offers new opportunities to develop optimized structures with advanced materials.” However, before engineers can create these structures, they have to improve the additive manufacturing process.

Since additive manufacturing is a complex process, it can be difficult to study. This technique varies based on the materials involved and the specific type of additive manufacturing. Studying this process also requires accounting for many different effects, such as:

- Multiple phase transformations
- Transfer of energy, mass, and momentum
- Sintering
- Photopolymerization
- Drying
- Crystallization
- Deformation
- Stress

To account for these factors, engineers can use the COMSOL Multiphysics® software, which Roger mentions is “a unique software that has great advantages in the simulation of additive manufacturing.” The software helps engineers to not only “optimize the additive manufacturing process but also to predict the mechanical and microstructural consequences on the product.” Through this, engineers can include all of the relevant physics and determine the ideal manufacturing conditions and part geometries that balance the needs of stiffness, weight reduction, and heat dissipation.

*Left: An example of the additive manufacturing process, which involves many different physics. Image by Les Pounder — Own work. Licensed under CC BY-SA 2.0, via Flickr Creative Commons. Right: Example of an additive manufacturing part created with two materials and filled with a honeycomb inner structure. Image courtesy Frédéric Roger.*

A challenge is that analyzing the additive manufacturing process while coupling the relevant physics can result in large model sizes and long computational times. To overcome this issue, Roger implements several different simulation strategies, such as activating mesh properties, using adaptive remeshing, and performing sequential simulations.

By taking a sequential approach, Roger is able to better analyze the succession of thermodynamic states that a material experiences during additive manufacturing. At the same time, this approach helps to reduce the complexity of the multiphysics couplings by dissociating them over time. As such, sequential simulations provide a way to comprehensively model and optimize the additive manufacturing process while reducing computational costs.

For their simulations, Roger and his team focused on fused-deposition modeling (FDM®), a common additive manufacturing technique that is affordable and enables control over process parameters. The aim of the study was to optimize the internal and external geometry of a printed thermoplastic part and achieve the best possible performance. To accomplish these goals in an efficient manner, the team split their analysis into three parts, discussed below.

For more information about this study, check out the researchers’ paper.

In the first part of the study, the researchers wanted to minimize the total weight of a printed structure while maintaining a material distribution that maximizes stiffness. To do so, they used topology optimization and structural mechanics analysis to study a mechanical structure exposed to a tensile load.

*Original geometry and boundary conditions (left) and the Young’s modulus distribution that defines the optimal shape by color contrast (right). Left image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble presentation. Right image courtesy Frédéric Roger.*

Through the studies, they found an optimal shape for the part, determining that the middle of the shape has the highest stress levels. As such, the researchers divided the structure into domains based on the stress concentration field: a high-stress middle area surrounded by two low-stress areas. In the following study, they used this information to apply specific manufacturing conditions to the high-stress zone.

*The stress fields in the optimized geometry. Image courtesy Frédéric Roger.*

In the second study, the researchers aimed to increase the stability of the high-stress zone in their part by testing two possible infill strategies:

- Heterogeneous filling with variable densities
- Multimaterial filling

In the heterogeneous case, the team created a more resistant domain in the high-stress middle area by using a higher density of infill. At the same time, they minimized the weight of the external areas by using less material. The results indicated that the ideal geometry contains 60% material in the high-stress region and 20% material in the low-stress regions.

*Printing an optimized part using one material with varying densities. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.*

As shown below, the multimaterial case involved using red ABS plastic on the ends of the part and black conductive ABS with improved mechanical properties in the middle. The team found that they could replace the conductive ABS with materials similar to ABS that have reinforced fillers to increase stiffness.

*Printing an optimized part using two materials. Image by F. Roger and P. Krawczak and taken from their COMSOL Conference 2015 Grenoble paper.*

After optimizing the inner and outer designs of the 3D-printed part, the researchers modeled the fused thermoplastic deposition process and evaluated manufacturing parameters. The resulting simulations helped them to accurately predict thermal history, wetting conditions, polymer crystallization, interactions between filaments, and residual stresses and strains. One example is shown below, depicting the plastic strain during the heating and cooling process.

*The fusion and solidification of a disk that is irradiated by a laser beam as well as the resulting plastic strain evolution. This analysis takes Newtonian fluid flow and solid thermomechanical properties into account. Animation courtesy Frédéric Roger.*

The study also investigated the heat and mass transfer within the first two layers of a thin-walled tube. The researchers were then able to analyze the plastic droplet deposition process and identify areas where the filaments reached fusion temperature. The animations of the material deposition study are shown below. They depict a heat source moving along a deposition pattern and heating the filaments up to fusion temperature, ~230°C for ABS droplets. The extruder path domain in the simulations is premeshed and the meshes are continuously activated depending on the extruder’s position.

*Two-layer circular deposition (top). The moving heat source represents the hot ABS deposition. The thermal expansion of the two layers (amplified by a factor of five), showing the moving heat source activating the properties of the material (bottom). Here, blue indicates a nonactivated mesh and the physical properties (thermal conductivity and stiffness) are close to zero. Animations courtesy Frédéric Roger.*
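The activation idea itself can be sketched in a few lines: elements along the premeshed extruder path keep near-zero properties until the moving heat source has passed their position. The following minimal 1D Python sketch uses invented values for the conductivity, path length, and extruder speed:

```python
n_elem = 100
path = [i / (n_elem - 1) for i in range(n_elem)]  # element positions along the path [m]
k_solid, k_void = 0.25, 1e-6  # ABS-like vs. deactivated conductivity [W/(m*K)]

def conductivity(t, speed=0.05):
    """Per-element thermal conductivity at time t [s]: elements the extruder
    (moving at `speed` m/s) has already passed are activated."""
    front = speed * t  # current extruder position along the path
    return [k_solid if x <= front else k_void for x in path]

k = conductivity(t=10.0)  # extruder at 0.5 m: about half the path is active
```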

Using these simulations, Roger and his team predicted the temperature field between the filaments during the deposition process, an important factor that affects filament adhesion. Similar analyses could help researchers compare different additive manufacturing conditions and determine the best deposition strategy for a specific application.

Roger says that these simulations enabled his team to “define an additive manufacturing part whose internal and external architectures give it the best possible industrial performance.” Of course, this is only the start of what can be achieved by combining additive manufacturing and multiphysics simulation.

If you have any tips for using COMSOL Multiphysics to study the additive manufacturing process, be sure to let us know in the comments below!

- Read more about the researchers’ work in their paper: "Optimal Design of Fused Deposition Modeling Structures Using COMSOL Multiphysics"
- Check out these related blog posts:

*FDM is a registered trademark of Stratasys, Inc.*

Topology optimization helps engineers design applications in an optimized manner with respect to certain *a priori* objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses for microacoustics topology optimization.

A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries.

The governing equation is the standard wave equation with material parameters given in terms of the density \rho and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via a variable, \epsilon. This interpolation variable ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.

*Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.*
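In code, such an interpolation is a penalized blend between the two material extremes. Below is a minimal sketch; the exponent and the air/solid property values are generic placeholders, and practical acoustic implementations often interpolate inverse quantities such as 1/\rho and 1/K instead:

```python
def simp(eps, v_air, v_solid, p=3.0):
    """SIMP-style interpolation between air (eps = 0) and solid (eps = 1).
    The penalization exponent p > 1 pushes intermediate designs toward 0/1."""
    return v_air + eps**p * (v_solid - v_air)

# Generic property values for air and a solid
rho = simp(0.5, 1.2, 7800.0)  # interpolated density [kg/m^3]
K = simp(0.5, 1.4e5, 160e9)   # interpolated bulk modulus [Pa]
```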

Using this approach will work for applications where the so-called thermoviscous losses (close to walls in the acoustic boundary layers) are of little importance. The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the *Narrow Region Acoustics* feature in the *Pressure Acoustics, Frequency Domain* interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.

For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.

*Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.*

An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot.

The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler’s equation. At the boundary, the velocity is zero because of viscosity, since the air “sticks” to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.

Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent conditions. These equations are implemented in the *Thermoviscous Acoustics* physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this full formulation is not well suited for topology optimization, so certain simplifying assumptions are needed. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity and the pressure gradient as

\vec{v}=\Psi_{v} \frac{\nabla{p}}{ik{\rho_0}c}

where the viscous field \Psi_{v} is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.

In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity in the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between temperature variation and acoustic pressure can be written in a general form (Ref. 1) as

T=\Psi_{h} \frac{p}{{\rho_0}{C_p}}

where the thermal field \Psi_{h} is a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.

As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.
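To get a feel for these fields, we can evaluate the known low reduced frequency solution for the viscous field in a narrow slit, one of the simple geometries for which an analytical result exists. The slit geometry, dimensions, and air viscosity below are assumptions chosen for illustration:

```python
import cmath
import math

def psi_v_slit(y, h, f, nu=1.51e-5):
    """Viscous field Psi_v across a slit of half-height h [m] at frequency
    f [Hz] (low reduced frequency solution). y is the distance from the
    slit centerline; nu is the kinematic viscosity of air [m^2/s]."""
    k_v = cmath.sqrt(1j * 2.0 * math.pi * f / nu)  # viscous wavenumber
    return 1.0 - cmath.cosh(k_v * y) / cmath.cosh(k_v * h)

# Psi_v -> 0 at the wall (no slip) and -> 1 in the bulk when the slit is
# much wider than the viscous boundary layer (~69 um at 1 kHz)
print(abs(psi_v_slit(0.0, 1e-3, 1000.0)))   # centerline: close to 1
print(abs(psi_v_slit(1e-3, 1e-3, 1000.0)))  # wall: 0
```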

For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustics topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section.

For simplicity, we look only at wave propagation in a waveguide of constant cross section. This is equivalent to the so-called Low Reduced Frequency model, which may be known to those working with microacoustics. The viscous field can be calculated (Ref. 1) via Equation 1 as

(1)

\Psi_{v}+ k_v^{-2} \Delta_{cd} \Psi_{v}=1

where \Delta_{cd} is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the *Narrow Region Acoustics* feature in the *Pressure Acoustics, Frequency Domain* interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.

In standard acoustics topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscoacoustic topology optimization, I came up with a heuristic approach, where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are

\Psi_{v} = 0 \thickspace (no slip)

and

\nabla_{cd}\Psi_{v} = 0 \thickspace (slip)

These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface could be represented by the former boundary condition and an air-air interface by the latter. We write the governing equation in a more general manner:

a_{v} \Psi_{v}+k_{v}^{-2}\Delta_{cd}\Psi_{v}=f_{v}

We already know that for air domains, (a_{v},f_{v}) = (1,1), since that gives us the original equation (1). If we instead set a_{v} to a large value so that the gradient term becomes insignificant, and set f_{v} to zero, we get

a_{v} \Psi_{v} = 0

This corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, (a_{v},f_{v}) should have the values ("large",0). Thus, we have established our interpolation extremes:

a_{v}(\epsilon)= \left\{ \begin{array}{ll}1\ \textrm{for}\ \epsilon=0 \thickspace (air) \\ large\ \textrm{for}\ \epsilon=1 \thickspace (solid) \end{array} \right.

and

f_{v}(\epsilon)= \left\{ \begin{array}{ll}1\ \textrm{for}\ \epsilon=0 \thickspace (air) \\ 0\ \textrm{for}\ \epsilon=1 \thickspace (solid) \end{array} \right.
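Any standard scheme can then bridge these two extremes. In the sketch below, the penalized form and the magnitude chosen for "large" are illustrative choices, not the published interpolation:

```python
def interp_coefficients(eps, large=1e4, p=3.0):
    """Interpolate (a_v, f_v) between the air extreme (1, 1) at eps = 0
    and the solid extreme (large, 0) at eps = 1, with a SIMP-like
    penalization exponent p."""
    a_v = 1.0 + eps**p * (large - 1.0)
    f_v = 1.0 - eps**p
    return a_v, f_v

print(interp_coefficients(0.0))  # air:   (1.0, 1.0)
print(interp_coefficients(1.0))  # solid: (10000.0, 0.0)
```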

I carried out a comparison between the explicit boundary conditions and interpolation extremes, with the test geometry shown in Figure 3. On the left side, boundary conditions are used, whereas on the adjacent domains on the right, the suggested values of a_{v} and f_{v} are input.

*Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.*

The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.

*Figure 4: The resulting field with contours for the setup in Figure 3.*

The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as with standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure via the equations given earlier. With this, the world’s first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.

Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonally shaped cross section has a certain acoustic loss due to viscosity effects. Each side length in the hexagon is approximately 1.1 mm, which gives an area equivalent to a circular area with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek to find an optimal topology so that we obtain a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:

*Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.*

A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.

*Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.*

The normalized acoustic loss for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.

*Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.*

For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example.

This novel topology optimization strategy can be expanded to a more general 1D method, where pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics to focus on improving topology optimization, in both universities and industry. I hope to see many advances in this area in the future.

1. W.R. Kampinga, Y.H. Wijnant, and A. de Boer, “An Efficient Finite Element Model for Viscothermal Acoustics,” *Acta Acustica united with Acustica*, vol. 97, pp. 618–631, 2011.
2. M.P. Bendsøe and O. Sigmund, *Topology Optimization: Theory, Methods, and Applications*, Springer, 2003.

René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD. René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.


Ion funnels consist of a stack of ring electrodes that have decreasing inner diameters. Due to a combination of RF and DC potentials as well as the presence of a background gas, these devices can focus ion beams by confining ions radially and moving them toward the narrow end of the funnel. In doing so, the funnel can transport ions between the ion source and mass filter with minimal ion losses.

*Simulation of an ion funnel.*

Ion funnels can be used to inject ions into quadrupole mass filters and ion mobility spectrometers, enabling them to separate and analyze mixtures of ionized gases. These devices have a wide variety of applications, such as:

- Detecting explosives
- Studying complicated biological molecules
- Analyzing residual gas
- Identifying cancer during surgery
- Studying dinosaur skin

Of course, before ion funnels can be put to use, we need to gain insight into their design and functionality.

In this example, we analyze the focusing effect of an ion funnel that combines RF and DC potentials. The model contains a set of insulated ring-shaped electrodes driven by an RF potential, with adjacent electrodes out of phase. In addition, there is a neutral argon buffer gas within the funnel. To model the interaction of the ions with the neutral background gas, we use the *Collisions* node with an *Elastic* subnode and the Monte Carlo collision setting.
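The essence of a Monte Carlo collision treatment is that, at every time step, each ion collides with a background gas atom with probability P = 1 − exp(−νΔt), where ν is the collision frequency. The sketch below illustrates that decision rule in isolation; the numerical values are hypothetical and this is not COMSOL's internal implementation.

```python
import math
import random

def collides(nu, dt, rng=random.random):
    """Return True if a collision occurs during this time step.
    nu: collision frequency [1/s], dt: time step [s]."""
    p = 1.0 - math.exp(-nu * dt)  # probability of at least one collision in dt
    return rng() < p

# Hypothetical values: nu = 1e7 1/s, dt = 1e-8 s gives nu*dt = 0.1
random.seed(0)
hits = sum(collides(1e7, 1e-8) for _ in range(100_000))
print(hits / 100_000)  # close to 1 - exp(-0.1) ≈ 0.0952
```

When a collision does occur, the full algorithm would then sample a new post-collision velocity for the ion from the elastic scattering kinematics, which is omitted here.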

The RF potential radially confines the ions, and a DC bias directs them toward the increasingly narrow electrodes. The superposition of these two fields enables the funnel to focus the ions, sending them through the funnel and counteracting the thermal dispersion and Coulombic repulsion effects.

To create this model, we use three different interfaces in the COMSOL Multiphysics® software:

- The *Electrostatics* interface to compute the DC fields
- The *Electric Currents* interface to compute the AC fields
- The *Charged Particle Tracing* interface to model ion movement through the funnel. This interface accounts for the interaction of the AC and DC fields and neutral particles in the gas, although the interactions between the ions themselves are not taken into account, as their density is suitably low.

The simulation results for the ion funnel show that the positive ions are successfully moved from the wider end of the funnel to the narrow end via the gradual DC bias. To keep the ions inside the funnel, the AC voltage is kept out of phase between the adjacent electrodes. As seen below, this results in a very large electric potential gradient near the electrodes.

*The combined electric potential of the electrodynamic ion funnel when time = 0.*

Using this model, we also investigate the ion trajectories in the funnel. These trajectories show that the ions are confined to the increasingly small area. Due to this confinement, the ions can be efficiently transported to another device, such as a mass filter.

*Positive ion trajectories in the electrodynamic ion funnel.*

Moving on, let’s take a closer look at the ions located at the narrow end of the funnel. While the ions are released along the positive *x*-axis, they become uniformly distributed around the *z*-axis when they reach the end of the funnel.

*The* x*- and* y*-coordinates of the ions at the narrow end of the funnel. In this plot, blue indicates particles still in the funnel and red indicates particles that have exited the funnel. Note that these results may differ from the former two plots because the* Collisions *node uses random numbers to decide if a collision takes place at every time step.*

Want to try this ion funnel example? Access the model documentation and associated MPH-files via the button below.


Walking into a video game tournament for the first time, I was surprised by the number of CRT televisions in the room. Why use box TVs instead of the current flat-screen versions? When I asked my brother — who had participated in these tournaments before — he told me that CRT TVs are better for displaying certain games, as they deliver the desired response time and frames per second. This advantage makes CRT TVs popular in the gaming community.

*A CRT television. Image by Daniel Oines — Own work. Licensed under CC BY 2.0, via Flickr Creative Commons.*

Unlike newer models, these televisions rely on CRTs, which are vacuum tubes that control how electron beams reach a screen. To focus charged particle beams, some CRTs use einzel lenses. These lenses are also found in ion propulsion systems as well as ion and electron beam experiments.

The focusing ability of an einzel lens depends on the following factors:

- Initial particle energy
- Initial beam collimation
- Voltage at each electrode

To accurately study these factors in einzel lens designs, we can use particle tracing simulation.

This einzel lens example consists of three cylinders aligned on the same axis. Of these cylinders, the middle one maintains a fixed voltage, while the outer two cylinders are grounded.

The electrons studied with this model have an initial kinetic energy of 20 keV. Their speed is an appreciable fraction of the speed of light. As such, relativistic effects are taken into account.
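We can quickly verify that claim. Using the relativistic relation between kinetic energy and speed, with an electron rest energy of about 511 keV, a 20 keV electron moves at roughly 27% of the speed of light:

```python
import math

def beta_from_ke(ke_kev, rest_energy_kev=510.999):
    """Fraction of the speed of light (v/c) for a given kinetic energy,
    using the relativistic relation KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 + ke_kev / rest_energy_kev
    return math.sqrt(1.0 - 1.0 / gamma**2)

print(f"v/c = {beta_from_ke(20.0):.3f}")  # ~0.272: clearly relativistic
```

At this speed, the classical expression v = sqrt(2·KE/m) would already overestimate the velocity by a few percent, which is why the model includes relativistic corrections.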

We can solve this model by using two different studies and interfaces. The first is a stationary study that uses the *Electrostatics* interface to calculate the electric potential and 3D electrostatic field. We then use the corresponding electric fields to exert an electric force on the modeled electrons. Second, a time-dependent study and the *Charged Particle Tracing* interface can be used to determine the electron particle trajectories.

In the next section, we see the results of these studies.

Let’s take a look at the area around the electrodes (cylinders, in this case) where the beam is focused. In the left image below, we see the equipotential surfaces that surround the electrodes. We can also study the electric potential and fringe fields by looking at a cross section near the electrodes (shown in the right image below).

*The isosurfaces of the electric potential (left) and electric potential and fringe fields (right) around the electrodes.*

Expanding the view helps to visualize the electron trajectories through the einzel lens. As shown below, the particles reduce their speed as they approach the lens. When passing through the lens, they begin to accelerate again, eventually reaching their initial speeds.

*The left image shows the electron trajectories in an einzel lens. The colors seen here represent the ratio of the particle kinetic energy to the initial kinetic energy. The right image shows the electron trajectories and the isosurfaces of the electric potential.*

Next, we examine the nominal beam trajectory of the electrons through the einzel lens. We also account for a common measure of the area taken up by a charged particle beam in transverse phase space: hyperemittance.

*The nominal beam trajectory. Here, the color expression depicts the beam hyperemittance.*

With particle tracing modeling, we are able to better analyze einzel lenses and can use these results to improve our designs. Try the einzel lens example yourself by clicking on the button below.

- Check out the Phase Space Distributions in Beam Physics blog series
- Learn about modeling particles in a quadrupole mass spectrometer on the COMSOL Blog
- Read this blog post on modeling particle migration in inertial focusing

When performing laboratory experiments, you rely on the precision and accuracy of frequently used measurement equipment. While plenty of information is available in the equipment specifications, it usually applies to new, well-calibrated systems. However, you might forget to calibrate your devices, or the system may show a systematic bias due to wear and other processes.

If you have a data set exhibiting such errors, it is important to correct them so that you can analyze the measured data accurately. An applied example is an experiment of flow through a column, where you inject a chemical and record the breakthrough curve at the outlet. For further analysis, the set flow rate of the pump is used. However, due to calcification, the flow rates are systematically biased. Performing a multiparameter optimization with various flow rates enables you to obtain a factor to correct all of the data.

This optimization problem is based on a transient model using the COMSOL Multiphysics® software and *Transport of Diluted Species* interface. A complete model is a prerequisite for the optimization step. The model discussed here is set in 1D and has a geometry with a column that is 1 m in length.

For the transport properties, you can set the flow velocity, which is simply the flow rate divided by the cross-sectional area of the column. Further, you can assign *Inflow* and *Outflow* boundary conditions, with a *Dirichlet* boundary condition at the inlet set to a fixed concentration.

*Setting up the physical problem prior to the optimization.*

While the true velocity of the problem is unknown, you can rewrite it as the product `u_in*tuning`. Here, `u_in` represents the set flow rate and `tuning` the global correction factor, which derives directly from `u_in = Q/(A*tuning)`. Hence, `tuning` accounts for the area change of the system.

In optimization jargon, `u_in` is the experimental parameter identifying the individual experimental runs, the obtained concentration, compared to the measured data, is our least-squares objective, and `tuning` is the control variable.

Starting with the complete physical model, you can add two items to transform it into an optimization model. First, the *Optimization* interface in our example has two nodes: *Objective* and *Control Variable*. For any optimization study, these nodes are a prerequisite.

While there are many feasible optimization objectives, the least-squares objective is well defined and has the form `Sum_i((u_obs_i - u_sim_i)^2)`. Hence, it minimizes the sum of the squared deviations between all given data points and the corresponding simulated values.
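As a minimal illustration, the least-squares objective is just a sum of squared residuals between observed and simulated values. The data values below are hypothetical breakthrough-curve samples:

```python
def least_squares_objective(u_obs, u_sim):
    """Sum of squared deviations between measured and simulated values."""
    return sum((o - s) ** 2 for o, s in zip(u_obs, u_sim))

measured  = [0.00, 0.12, 0.48, 0.90, 1.00]  # hypothetical measured data
simulated = [0.00, 0.10, 0.50, 0.88, 1.00]  # hypothetical model output
print(least_squares_objective(measured, simulated))  # small residual ≈ 0.0012
```

The optimizer's job is then to adjust the control variable until this sum is as small as possible across all experiments simultaneously.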

Due to the strict formal approach, there is no need to express the objective function. However, you need a data file that contains all information needed for a least-squares objective. It is important to note that the data file needs to be structured in columns. You can assign the individual columns in the subnodes.

Note that the order of nodes in the Model Builder tree (from top to bottom) corresponds to the order of columns in the data file (from left to right). It is also important that every column of the data file is identified by an appropriate subnode.

*Assign the nodes in the least-squares objective (top down) to the semicolon-separated columns (left to right) in the data file.*

The transport optimization example requires four columns:

- A parameter column
- A time column
- A value column
- A coordinate column

In the example, you set the identifier `u_in` in the parameter column. This is the flow rate of the pump, used to discriminate between the different experiments, as well as the same parameter that is assigned under *Global Definitions* > *Parameters*. You can also find this parameter used for the transport properties in the *Transport of Diluted Species* interface.

The times stated in the data file need to be in SI units, seconds. However, in general, these times don’t need to match the stored output times accurately. Nevertheless, good accuracy is still recommended.

In the value column, you give the expression that is evaluated from the numerical model outcome. This should be entered so that it represents the exact metric of the recorded data. *Variable Name* refers to the measured data, which can be accessed during postprocessing using this name.

The stated coordinate in the file is the location where measurements are made. There is also one special consideration: The number of coordinate columns in the data file must be the same as the dimension of the geometry, even when the selected *Least-Squares Objective* feature is on a lower dimension. In that case, model expressions are evaluated at the nearest points on the given selection.

Here, you make two adjustments. First, set the method to the well-known Levenberg–Marquardt algorithm, which is designed to tackle least-squares problems efficiently. Second, since the goal is to perform a multiparameter study, switch the least-squares time/parameter method to *From least squares objective*. The other settings can be left at their defaults for now.
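To show what the Levenberg–Marquardt idea looks like in miniature, here is a damped Gauss–Newton loop fitting a single control variable against synthetic data from several "experiments". The breakthrough-style model, parameter values, and damping are hypothetical assumptions for illustration; this is not the solver used by the software.

```python
import math

# Hypothetical breakthrough-style model: u(t) = 1 - exp(-u_in * tuning * t)
T = [0.2 * k for k in range(1, 11)]   # sample times [s]
U_INS = [0.5, 1.0]                    # pump settings (experimental parameter)
TRUE_TUNING = 0.8                     # the "unknown" correction factor

def model(tuning, u_in, t):
    return 1.0 - math.exp(-u_in * tuning * t)

# Synthetic "measurements" generated from the true factor
DATA = {u: [model(TRUE_TUNING, u, t) for t in T] for u in U_INS}

def lm_fit(tuning, lam=1.0, iters=300):
    """Minimal damped Gauss-Newton (Levenberg-Marquardt-style) loop for one
    control variable, summing residuals over all experiments and times."""
    for _ in range(iters):
        g = h = 0.0                   # gradient and approximate Hessian
        for u in U_INS:
            for t, obs in zip(T, DATA[u]):
                r = model(tuning, u, t) - obs          # residual
                J = u * t * math.exp(-u * tuning * t)  # d(model)/d(tuning)
                g += J * r
                h += J * J
        step = -g / (h * (1.0 + lam))  # damped normal-equation step
        if abs(step) < 1e-12:
            break
        tuning += step
    return tuning

print(round(lm_fit(1.2), 4))  # converges back to the true factor 0.8
```

The damping term `lam` is what distinguishes Levenberg–Marquardt from plain Gauss–Newton: it shortens the step when the local quadratic model is unreliable, trading speed for robustness.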

The correct time stepping and parameter sweeps are recognized directly from the data file, so there is no need to set them individually in the *Step 1: Time Dependent* settings. With these settings, the solver sums the squared deviations over all time steps and parameters and minimizes this sum by finding a global correction factor that is appropriate for all individual experiments.

*Settings of the Optimization study step.*

With all of these settings, you have a very generic model that can be applied to many experimental runs by updating only the underlying data file. The settings automatically adjust the model to the experimental parameters. So far, this includes variations in the number of experimental parameters and in the recorded sample times.

Such models could easily be extended to consider more parameter variations in an analogous manner; e.g., variations of the input concentration or additional measurement locations. Further steps could transform the model into an application, where you can freely choose the length of the column, and hence the geometry. This way, you end up with a powerful tool to evaluate your experiments and assure their quality.

*Results for a multiparameter fit based on three individual measurements (symbols) with the simulated and optimized output (lines).*

- Try a liquid chromatography tutorial model:
- Read this related blog post:

The idea behind natural selection is based on the fact that there is a great deal of variation in the world; even organisms of the same species differ from one another. The organisms with traits best suited for their environment are more likely to survive and pass those traits on to the next generation, which increases the occurrence of the beneficial traits in future generations. As a result, each successive generation becomes more adapted to their environment. That’s natural selection, in a nutshell.

*Over time, natural selection can result in the perpetuation of beneficial traits in a set of organisms, such as long necks in giraffes.*

Genetic algorithms take the principles of natural selection and apply them to optimization problems. They study a set of “individuals” in an environment that has known values and characteristics. The studied population of individuals represents potential solutions to the problem. Appropriate mathematical functions define the fitness of each individual by representing how well adapted they are to the environment.

As with natural selection, using genetic algorithms is an iterative process, with the individuals in each “generation” potentially experiencing mutations and exchanging design characteristics. The process continues until the algorithms find an optimized solution for the particular environment or reach a maximum number of generations.
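The loop described above (selection, crossover, mutation, repeat) can be sketched in a few lines. The fitness function, population size, and mutation parameters below are hypothetical choices for a toy one-dimensional problem, not the researchers' actual setup:

```python
import random

random.seed(1)

def fitness(x):
    """Hypothetical fitness: how close a candidate is to the optimum at x = 3."""
    return -(x - 3.0) ** 2

def evolve(pop, generations=60, mut_rate=0.3):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]           # selection: keep the fittest half
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                # crossover: blend two parents
            if random.random() < mut_rate:
                child += random.gauss(0.0, 0.5)  # mutation: random perturbation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve([random.uniform(-10.0, 10.0) for _ in range(20)])
print(round(best, 2))  # close to the optimum at x = 3
```

Because the fittest parents survive unchanged each generation (elitism), the best solution found can never get worse, only better, which mirrors the monotone improvement seen in the loss-per-generation plot below.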

A research group from Universidad Autónoma de San Luis Potosí and Instituto Tecnológico de San Luis Potosí used genetic algorithms to improve the geometry of an optical antenna (nanoantenna). They wanted a design that optimally concentrates the electromagnetic field of a dipole nanoantenna at a 500-THz resonance frequency, which is a variable that depends on the dimensions of the nanostructure.

As an additional point of study, the team investigated if conventional RF macroscopic antenna geometries can be efficiently used for nanoscale optical frequency regimes. Here’s what they found out…

In their research, the team used LiveLink™ *for* MATLAB® to seamlessly link the MATLAB® software and the COMSOL® software. In doing so, they were able to use MATLAB® software to automatically drive their COMSOL Multiphysics® simulations. Using MATLAB® software, the researchers designed a user interface and genetic algorithm. The genetic algorithm was programmed with the Global Optimization Toolbox, available as an add-on to the MATLAB® software. The algorithm performed analyses to suggest various iterative design changes for the dipole nanoantenna geometry in 2D. These genetic algorithm results were then compared with the optimal conditions to evaluate their fit.

*The lines show how mutations modify the antenna geometry during the genetic algorithm process. Image by R. Diaz de Leon-Zapata, G. Gonzalez, A.G. Rodríguez, E. Flores-García, and F.J. Gonzalez and taken from their COMSOL Conference 2016 Boston paper.*

Whenever the algorithm yielded a design possibility, the researchers could automatically solve the electromagnetic equations, generate antenna response data for a given frequency, and evaluate the fitness function without having to interact with the graphical user interface in COMSOL Multiphysics. The optimized fitness function determined the minimum loss of the electromagnetic field at a frequency of 500 THz. These findings were then used in MATLAB® software to perform further genetic algorithm adjustments. This entire iterative loop was performed automatically, increasing the efficiency of the study.

*The trend toward a minimized electromagnetic field loss for each new generation. Image by R. Diaz de Leon-Zapata et al. and taken from their COMSOL Conference 2016 Boston paper.*

The automatic process continued until the software reached convergent results for an improved antenna geometry. Using this procedure, the researchers obtained their final results at a lower computational cost and a shorter processing time than traditional analytical processes.

*The final antenna geometry attained from genetic algorithm optimization. Image by R. Diaz de Leon-Zapata et al. and taken from their COMSOL Conference 2016 Boston paper.*

Let’s take a closer look at the final design of the nanoantenna. The final design iteration (below, to the right) is quite different from the design of a conventional dipole antenna (below, to the left). The final design is the most fit in this case, since it has the lowest electromagnetic field loss at the center of the structure.

*Comparison of the electromagnetic field concentrations for a classical dipole antenna (left), the first design iteration from the genetic algorithm (middle), and the final geometry (right). Image by R. Diaz de Leon-Zapata et al. and taken from their COMSOL Conference 2016 Boston paper.*

To visualize this in a different way, the research group plotted the electromagnetic field concentration for a conventional dipole antenna and the finalized geometry over a range of frequencies. The comparison shows that while both geometries encompass the same effective area, the version created using the genetic algorithm represents an optimal concentration. Compared with previous studies, the final design also has an increased electromagnetic signal and bandwidth.

*Comparison of the electromagnetic field concentration for a classical dipole geometry (red) and the geometry generated from the genetic algorithm (blue). Image by R. Diaz de Leon-Zapata et al. and taken from their COMSOL Conference 2016 Boston paper.*

Using these findings, the researchers concluded that the genetic algorithm can enhance the maximum electromagnetic field concentration in the optical frequency regime. The team’s next step is to use the results of this analysis to study nanostructure fabrication and characterization, which could help foster the creation of renewable energy devices.

- Read the research team’s original work: “Genetic Algorithm for Geometry Optimization of Optical Antennas“
- Watch this 18-minute video to learn about using LiveLink™ *for* MATLAB®
- Read a blog post to get an introduction to designing antennas in COMSOL Multiphysics

*MATLAB is a registered trademark of The MathWorks, Inc.*

Pipelines are an economic approach to transporting fluids like oil, natural gas, and water across land and sea, though they are expensive to build. These structures are comprised of steel or plastic tubes that are typically buried or run at the bottom of the sea, with pump stations distributed throughout the system to keep the fluid moving.

As a petroleum mixture is pumped through a pipeline, it generates heat as a result of internal friction forces. The origin of this heat is the energy supplied by the pump. This heat is quickly dissipated if the pipeline runs through cold environments. Eventually, the temperature of the mixture reaches the same temperature as that of the environment, if the pipeline is not insulated. At lower temperatures, oil becomes more viscous, which increases the energy consumption of the pumps. On top of that, cold petroleum mixtures require preheating before they can be used in the refinery. The preheating process consumes energy and requires investment to build and maintain.

*Pipelines are used to transport fluids throughout the world.*

An easy and obvious remedy is to insulate the pipeline, keeping the energy supplied by the pumps inside the pipe and avoiding the decrease in oil temperature. The trick is to insulate the pipeline well enough, but no more, so that the return on investment justifies the cost of the insulation. If the petroleum mixture can be kept at a high enough temperature, the cost of the preheating process can be eliminated and the pumps' energy consumption reduced substantially. The reduction of these costs has to justify the investment in insulation.

The fluid flow and heat transfer processes in the pipeline can be modeled and simulated accurately using the COMSOL Multiphysics® software. The models can be used to design an insulation that is as inexpensive as possible, yet as efficient as required to keep the oil at the desired temperature.

Our Insulation of a Pipeline Section tutorial model features a 150-km pipeline section that has an inlet temperature of 25°C. The oil that enters the pipeline flows at a rate of 2500 m^{3}/h. To set up and solve the energy and flow equations that describe the transport of the fluid within the pipeline, we use the *Nonisothermal Pipe Flow* interface.

This particular case involves analyzing one pipe wall and one insulation layer, as highlighted in the schematic below. Here, the dark and light gray layers represent a two-layered wall, while light blue represents the film resistances on the inside and outside of the walls. Note that the pipe wall is 2 cm thick in this example.

*Schematic of the pipeline cross section, where h _{int} and h_{ext} are the film heat transfer coefficients inside and outside of the tube and k_{ins} and k_{wall} are the thermal conductivity of the insulation and the wall, respectively.*
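The layered wall in the schematic acts as thermal resistances in series: inner film, steel wall, insulation, and outer film. The sketch below evaluates the classic cylindrical-shell resistance formula for the heat loss per meter of pipe; all numerical values (radius, conductivities, film coefficients, temperatures) are hypothetical and not taken from the tutorial model.

```python
import math

def heat_loss_per_meter(T_in, T_out, r, k_wall, k_ins, t_wall, t_ins, h_int, h_ext):
    """Heat loss [W/m] through the series film-wall-insulation resistance
    of an insulated pipe. All input values here are hypothetical."""
    r1 = r             # inner pipe radius [m]
    r2 = r1 + t_wall   # outer radius of the steel wall
    r3 = r2 + t_ins    # outer radius of the insulation
    R = (1.0 / (2 * math.pi * r1 * h_int)              # inner film resistance
         + math.log(r2 / r1) / (2 * math.pi * k_wall)  # steel wall
         + math.log(r3 / r2) / (2 * math.pi * k_ins)   # insulation layer
         + 1.0 / (2 * math.pi * r3 * h_ext))           # outer film resistance
    return (T_in - T_out) / R

# Hypothetical case: 25°C oil, 0°C surroundings, 2 cm steel wall
for t_ins in (0.02, 0.089):
    q = heat_loss_per_meter(25.0, 0.0, 0.5, 45.0, 0.03, 0.02, t_ins, 250.0, 10.0)
    print(f"{t_ins * 100:.1f} cm insulation: {q:.1f} W/m lost")
```

With a low-conductivity insulation layer, the logarithmic insulation term dominates the total resistance, which is exactly why a modest increase in thickness cuts the heat loss so effectively.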

In the first study, we calculate the temperature along the pipeline for two different cases: one where perfect insulation is assumed and one where the pipeline has no insulation. The plot below shows that the heat resulting from friction forces in the fluid causes its temperature to increase by about 3°C over 150 km. When no insulation is added to the pipeline, the outlet temperature is similar to that of the surroundings.

*Plot comparing the fluid temperature when there is perfect insulation on the pipeline (green) and when there is no insulation (blue).*

With an understanding of the fluid flow and heat transfer processes, we can perform optimization calculations to identify the minimum insulation thickness required to keep the oil temperature at a constant level throughout the pipeline. The results from this particular optimization study indicate that the minimum insulation thickness is around 8.9 cm. We could also perform a similar optimization study, but for a minimum acceptable oil temperature level at the end of the pipeline, which could potentially reduce the insulation thickness (and the cost) even more.

Based on the calculated minimum thickness for the insulation, we may estimate the investment costs and decide if these costs are motivated by the reduced pumping and preheating costs. We can potentially reduce energy consumption in the pumping process and eliminate it for preheating, making the process more efficient and environmentally friendly.


Vacuum technology is found in many high-tech applications, including semiconductor processing, mass spectrometry, and materials processing. This technology creates low-pressure environments by using vacuum pumps to remove air molecules from enclosed vacuum chambers.

One type of vacuum pump is a turbomolecular pump, which consists of a bladed molecular turbine. The blades of modern turbomolecular pumps rotate extremely quickly, reaching speeds as high as 90,000 rpm.

*A turbomolecular pump.*

The momentum transfer from the rotating blades to the gas molecules compresses the gas, which is moved from the inlet to the outlet by the blades. As a result, the pump is able to generate and maintain a high vacuum on the inlet side of the blades. This pumping process is more efficient in the free molecular flow range, since the gas particles mostly collide with the rotor and not with each other.

To better understand and design turbomolecular pumps, you can model them with COMSOL Multiphysics. But first, let’s figure out the best way to do so.

Instead of focusing on the whole turbomolecular pump, our model geometry depicts part of a single turbomolecular pump stage (a row of blades). Using the model, we calculate gas molecule trajectories in the empty space between the blades. Since the blade row is rotationally periodic, we can assume sector symmetry in the modeling domain.

*Model geometry of one sector of one stage of a turbomolecular pump. Gray represents the space between two blades, green represents the blade walls, and black represents the rotor root.*

While we don’t use it here, one way to solve the model equations and calculate the pump’s performance in a free molecular flow regime is with the *Free Molecular Flow* interface from the Molecular Flow Module. This interface is an efficient option and is useful in cases where the molecules of extremely rarefied gases move significantly faster than any object in the modeling domain. However, in turbomolecular pumps, the speed of the gas molecules is comparable with the blade speed. As such, we need a different approach for this problem.

*The turbomolecular pump example model.*

We use a Monte Carlo approach and the *Rotating Frame* feature (new to the Particle Tracing Module in version 5.3 of COMSOL Multiphysics) to automatically apply the fictitious Coriolis and centrifugal forces to the particles. This enables us to compute the particle trajectories within a noninertial frame of reference that moves along with the blades.
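In a frame rotating with angular velocity ω, the two fictitious forces are the Coriolis force, −2m ω × v, and the centrifugal force, −m ω × (ω × r). The sketch below evaluates these textbook expressions directly; the particle mass, position, and velocity are hypothetical, and this is only an illustration of the physics, not the software's implementation.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def fictitious_forces(m, omega, r, v):
    """Coriolis (-2m w x v) and centrifugal (-m w x (w x r)) forces on a
    particle of mass m in a frame rotating with angular velocity omega."""
    coriolis = tuple(-2.0 * m * c for c in cross(omega, v))
    centrifugal = tuple(-m * c for c in cross(omega, cross(omega, r)))
    return coriolis, centrifugal

# Hypothetical argon atom (6.63e-26 kg) near a blade row spinning at 90,000 rpm
omega = (0.0, 0.0, 2 * math.pi * 90_000 / 60)  # ≈ 9425 rad/s about the z-axis
cor, cen = fictitious_forces(6.63e-26, omega, (0.05, 0.0, 0.0), (0.0, 400.0, 0.0))
print(cor, cen)  # Coriolis and centrifugal force vectors [N]
```

Note that the centrifugal force points radially outward (positive *x* for a particle on the positive *x*-axis), which is what drives gas molecules toward the blade tips in the rotating frame.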

This method provides accurate results on how the blade velocity ratio affects the pumping characteristics, such as the maximum compression ratio, transmission probability, and maximum speed factor. We base these characteristics on the transmission probability of argon atoms from the inlet to the outlet and vice versa.

For more information on how we created this model, including the geometric parameters and assumptions, check out the documentation for the turbomolecular pump tutorial.

Let’s begin by computing the transmission probabilities for particles propagating in the forward (inlet to outlet) and reverse (outlet to inlet) directions. As expected, when the blades are at rest, these probabilities are about equal. This is because there is no distinction between the two directions.

However, when the rotation of the blades begins to increase, the particles are more likely to be transported forward through the pump, as the walls successfully transfer momentum to the argon atoms. This corresponds to an increasing compression ratio.
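One common way to express this, which the model documentation can confirm in detail, is that the maximum (zero-flow) compression of a single stage is the ratio of the forward to the reverse transmission probability, and stages in series multiply their ratios. The probabilities below are hypothetical placeholders, not results from the model:

```python
def stage_compression(p_forward, p_reverse):
    """Maximum zero-flow compression of one blade row, taken as the ratio of
    forward to reverse transmission probabilities (an idealized estimate)."""
    return p_forward / p_reverse

def pump_compression(stages):
    """Stages in series multiply their individual compression ratios."""
    k = 1.0
    for p_f, p_r in stages:
        k *= stage_compression(p_f, p_r)
    return k

# Hypothetical per-stage probabilities at some fixed blade velocity ratio
print(pump_compression([(0.45, 0.15), (0.40, 0.10), (0.35, 0.07)]))  # ≈ 60
```

This multiplicative behavior is why practical pumps stack many blade rows: even modest per-stage asymmetries compound into large overall compression ratios.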

*The fraction of particles transmitted in the forward direction (left) and the reverse direction (right) as a function of blade velocity ratio.*

We also investigate how the compression ratio and speed factor are affected by the blade velocity ratio. To produce enough compression and speed, pumps use multiple bladed structures comprised of several disks and different types of blades. Blades close to the inlet have a high pumping speed and low compression ratio, while blades close to the outlet have the opposite characteristics.

When the velocity of these blades increases, as seen in the plots below, the maximum compression and speed factor increase. This confirms that the two blade types work together to enhance the performance of the pump.

*The effect of blade velocity on the maximum compression ratio (left) and maximum speed factor (right).*

This example highlights the new modeling features that enable you to more easily analyze turbomolecular pumps. Try it yourself by clicking on the button below.

- Check out these blog posts on particle tracing:

In the 1960s, G. Segré and A. Silberberg observed a surprising effect: When carried through a laminar pipe flow, neutrally buoyant particles congregate in a ring-like structure with a radius of about 0.6 times the pipe radius. This corresponds to a distance from the parallel walls of around 0.2 times the width of the flow channel. The reason for this behavior, as researchers would discover in the decades that followed, could be traced back to the forces that act on particles in an inertial flow.

Today, we use the term inertial focusing to describe the migration of particles to a position of equilibrium. This technique is widely used in clinical and point-of-care diagnostics as a way to concentrate and isolate particles of different sizes for further analysis and testing.

*Many types of medical diagnostics use inertial focusing for testing and analysis. Image in the public domain, via Wikimedia Commons.*

In order for inertial focusing to be effective in these and other applications, accurately analyzing the migration patterns of the particles is a key step. A new benchmark example from the latest version of COMSOL Multiphysics — version 5.3 — highlights why the COMSOL® software is the right tool for obtaining reliable results.

For this example, we consider the particle trajectories in a 2D Poiseuille flow. To account for relevant forces, we use derived expressions from a similar migration of particles in a 2D parabolic flow inside of two parallel walls (see Ref. 2 in the model documentation). Built-in corrections for both the lift and drag forces allow us to account for the presence of these walls in the simulation analysis.

Note: Lift and drag forces make up the total force acting on neutrally buoyant particles inside a creeping flow. By definition, the gravitational and buoyant forces cancel out one another.

We assume that the lift force acts only perpendicular to the direction of the fluid velocity. It is also assumed that the spherical particles are small in comparison to the width of the channel and that they are rotationally rigid.

To compute the velocity field, we use the *Laminar Flow* physics interface. This is then coupled to the *Particle Tracing for Fluid Flow* interface via the *Drag Force* node. Thanks to the *Laminar Inflow* boundary condition, we can automatically compute the complete velocity profile at the inlet boundary. For the laminar flow of a Newtonian fluid inside two parallel walls, it is known that the profile will be parabolic. This means that we could have directly entered the analytic expressions for fluid velocity. However, we opt to use the *Laminar Flow* physics interface in this case, as it demonstrates the workflow that is most appropriate for a general geometry.
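To see why entering the analytic expressions directly would also have worked, here is a minimal sketch of the known Poiseuille profile between two parallel walls. The channel height and centerline velocity below are illustrative values, not taken from the model; the sketch simply checks the textbook property that the mean velocity is two thirds of the maximum:

```python
# Analytic Poiseuille profile between parallel walls at y = 0 and y = H.
# H and u_max are illustrative values, not taken from the model.
H = 80e-6      # channel height [m] (assumed)
u_max = 1e-3   # centerline velocity [m/s] (assumed)

def u(y):
    """Parabolic velocity profile: zero at the walls, u_max at mid-channel."""
    return u_max * (1.0 - (2.0 * y / H - 1.0) ** 2)

# Numerically check that the mean velocity is (2/3) * u_max.
n = 10_000
u_mean = sum(u((i + 0.5) * H / n) for i in range(n)) / n
print(round(u_mean / u_max, 4))  # → 0.6667
```

In COMSOL Multiphysics itself this expression could be typed straight into a velocity inlet condition, but as noted above, the *Laminar Inflow* boundary condition computes the same profile automatically for any channel cross section.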

Now let’s move on to the results. First, we can look at the fluid velocity magnitude in the channel. As expected, the velocity profile is parabolic. Note that the aspect ratio of the geometry is 1000:1, so the channel is very long compared to its height. The plot uses an automatic view scale to make the results easier to visualize.

*The parabolic fluid velocity profile within a channel that is bound by two parallel walls.*

We can then shift our attention to the trajectories of the neutrally buoyant particles. Note that in the plot below, the color expression represents the *y*-component of the particle velocity in mm/s. The results indicate that all of the particles approach equilibrium positions at distances of about 0.3 D on either side of the center of the channel, where D is the width of the channel. It does, however, take longer for particles released near the center of the channel to reach these positions: the initial lift force on them is weaker because they are released where the velocity gradient is smallest. From the plots, we can see that the particles converge at heights of 0.2 and 0.8 times the width of the channel, in good agreement with the experimental observations.

*The trajectory of particles inside the channel.*
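The qualitative behavior above can be caricatured with a toy model. The cubic "lift velocity" below is a made-up stand-in, not the lift expression from Ref. 2 that the model actually uses; it is constructed to have stable zeros at 0.2 and 0.8 and an unstable zero at mid-channel, so it reproduces the observed picture: particles drift to the two equilibrium heights, and a particle released near the center starts with an almost-zero drift and migrates slowly:

```python
# Toy lateral-migration model. The cubic "lift velocity" is an illustrative
# stand-in (NOT the expression from Ref. 2): it has stable zeros at s = 0.2
# and s = 0.8 and an unstable zero at s = 0.5, where s is the particle
# height in units of the channel width D.
def lift_velocity(s, k=50.0):
    return -k * (s - 0.2) * (s - 0.5) * (s - 0.8)

def migrate(s0, dt=1e-3, steps=20_000):
    """Explicit Euler integration of ds/dt = lift_velocity(s)."""
    s = s0
    for _ in range(steps):
        s += dt * lift_velocity(s)
    return s

# Particles released off-center converge to 0.2 or 0.8; one released just
# above mid-channel converges to 0.8, but only after a slow initial drift.
for s0 in (0.05, 0.35, 0.501, 0.95):
    print(f"released at {s0:.3f} -> settles near {migrate(s0):.3f}")
```

The sketch only mimics the fixed-point structure of the real force balance; the actual magnitudes and time scales come from the wall-corrected lift and drag expressions in the model.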

The last two plots show the average and standard deviation of the normalized distance between the particles and the center of the channel. These results verify that the equilibrium distance from the center of the channel is in fact around 0.3 D.

*The average (left) and standard deviation (right) of the normalized distance between the particles and center of the channel.*
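Statistics like these are straightforward to reproduce from exported particle positions. As a sketch, assuming a hypothetical list of settled particle heights in units of the channel width D (illustrative numbers, not exported from the model), the two plotted quantities are just the mean and standard deviation of the normalized distance from the channel center:

```python
import statistics

# Hypothetical settled particle heights in units of the channel width D
# (illustrative numbers, not exported from the model): particles end up
# near 0.2 D and 0.8 D, i.e., about 0.3 D from the channel center.
heights = [0.199, 0.201, 0.200, 0.798, 0.801, 0.802, 0.200, 0.799]

# Normalized distance from the channel center, |y/D - 1/2|.
distances = [abs(y - 0.5) for y in heights]

print(round(statistics.mean(distances), 3))   # average distance ≈ 0.3
print(round(statistics.pstdev(distances), 4)) # small spread once settled
```

Once the particles have settled, the mean sits at the equilibrium distance of about 0.3 D and the standard deviation collapses toward zero, which is exactly what the two plots show.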

In order to effectively use inertial focusing for medical and other applications, you need to first understand the behavior of particles as they migrate through a channel to positions of equilibrium. With COMSOL Multiphysics® version 5.3, you can perform these studies and generate reliable results. This accurate description of inertial focusing serves as a foundation for analyzing and optimizing designs that rely on this technique.

Now it’s your turn! Give our new benchmark model a try:

Interested in learning about further updates in version 5.3 of COMSOL Multiphysics? You can get the full scoop in our 5.3 Release Highlights.
