Almost all media absorb electromagnetic radiation to some extent. In high-powered laser focusing systems, a medium such as a glass lens may absorb enough energy from the laser to heat up significantly, resulting in thermal deformation and changing the material’s refractive index. These perturbations, in turn, can change the way the laser propagates. With the Ray Optics Module, it is possible to create a fully self-consistent model of laser propagation that includes thermal and structural effects.
To understand how ray trajectories are affected by self-induced temperature changes, consider a collimated beam that strikes a pane of glass at normal incidence. Assume that an antireflective coating has been applied to the glass surface so that the rays are not reflected. A typical pane of glass absorbs a very small, but nonzero, fraction of the power transmitted by the beam. If the power is sufficiently low, the temperature change within the glass will be negligible, and the outgoing rays will be parallel to the incoming rays.
However, if a large amount of power is transmitted by the beam, the power absorbed by the pane of glass may substantially alter the temperature of the glass. The glass expands slightly, changing the angle of incidence of the rays and causing the transmitted rays to be deflected from their initially parallel trajectories. In addition, many materials have temperature-dependent refractive indices, and the temperature-induced change in the refractive index can also perturb the ray trajectories. Because the structural deformation and the change in refractive index tend to focus the outgoing rays, this phenomenon is known as thermal lensing.
Next, we take a more in-depth look at an application in which thermal and structural effects can significantly perturb ray trajectories.
Consider a basic laser focusing system that consists of two plano-convex lenses. The first lens collimates the output of an optical fiber while the second lens focuses the collimated beam toward a small target.
If the laser beam delivers a small amount of power, then it is straightforward to model the propagation of the beam toward the target by using the Geometrical Optics interface and ignoring the temperature change in the lenses. The following image shows the trajectories of the rays in the lens system.
However, even a high-quality glass lens absorbs a small fraction of the electromagnetic radiation that passes through it. If the optical fiber delivers a very large amount of power, then a thermally induced focal shift may occur; in other words, the changes in refractive index and lens shape can move the focus of the beam by a significant amount. If it is necessary to focus the laser accurately, then the possibility of thermally induced focal shift must be taken into account when designing the lens system.
In this example, we will observe how the temperature change in the lenses causes the beam to be focused at a location several millimeters away from the target.
To model ray propagation in the thermally deformed lens system, we use the following physics interfaces:

- Geometrical Optics
- Heat Transfer in Solids
- Solid Mechanics
The physics interfaces and nodes used in this model are shown in the following screenshot.
In addition to the Ray Optics Module, either the Structural Mechanics Module or the MEMS Module is needed to model the thermal expansion of the lenses.
Under the hood, the Ray Optics Module computes the ray trajectories by solving a set of coupled first-order ordinary differential equations,
\frac{d\mathbf{q}}{dt} = \frac{\partial \omega}{\partial \mathbf{k}}, \qquad \frac{d\mathbf{k}}{dt} = -\frac{\partial \omega}{\partial \mathbf{q}} \qquad (1)
where \mathbf{q} is the ray position, \mathbf{k} is the wave vector, and \omega is the angular frequency. The wave vector and angular frequency are related by
\omega = \frac{c |\mathbf{k}|}{n(\mathbf{q})} \qquad (2)
where c is the speed of light in a vacuum. In an absorbing medium, the refractive index can be expressed as n - i\kappa, in which n and \kappa are real-valued quantities.
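To illustrate how equations of this form are integrated, the following Python sketch traces a single ray through a hypothetical graded-index medium using a simple forward Euler scheme. The index profile, 2D geometry, and step count are illustrative assumptions for the sketch, not the module's actual implementation.

```python
import numpy as np

C = 299792458.0  # speed of light in a vacuum, m/s

def n(q):
    # Hypothetical graded refractive index: 1.5 on the axis, decreasing with |y|
    return 1.5 - 0.1 * (q[1] ** 2)

def grad_n(q, h=1e-6):
    # Central-difference gradient of the refractive index
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = h
        g[i] = (n(q + dq) - n(q - dq)) / (2 * h)
    return g

def rhs(q, k):
    # dq/dt = dw/dk and dk/dt = -dw/dq for w = c|k|/n(q)
    knorm = np.linalg.norm(k)
    nq = n(q)
    dq = C * k / (nq * knorm)          # ray advances along k at speed c/n
    dk = C * knorm * grad_n(q) / nq**2  # k bends toward higher refractive index
    return dq, dk

def trace(q, k, t_end, steps=2000):
    # Forward Euler time stepping -- adequate for a sketch, not production use
    dt = t_end / steps
    for _ in range(steps):
        dq, dk = rhs(q, k)
        q = q + dt * dq
        k = k + dt * dk
    return q, k
```

A ray launched on the axis (where the index gradient vanishes) travels in a straight line at speed c/n, while off-axis rays curve toward the higher-index region.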
As the rays enter and leave the lenses, they undergo refraction according to Snell’s Law,
n_1 \sin \theta_1 = n_2 \sin \theta_2 \qquad (3)
where \theta_1 and \theta_2 are the angle of incidence and the angle of refraction, respectively.
The intensity and power of the refracted rays are computed using the Fresnel Equations. In most industrial laser focusing systems, an antireflective coating is applied to the surfaces of the lenses to prevent large amounts of radiation from being reflected.
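As a minimal illustration of the refraction and reflection calculation, the sketch below evaluates Snell's law and the Fresnel power reflectances for s- and p-polarized light between two lossless media. This is the textbook form of the equations, not the module's internal routine, and it assumes the incidence angle is below the critical angle.

```python
import numpy as np

def snell(theta1, n1, n2):
    # Snell's law: n1 sin(theta1) = n2 sin(theta2); assumes no total internal reflection
    return np.arcsin(n1 * np.sin(theta1) / n2)

def fresnel_R(theta1, n1, n2):
    # Power reflectances for s- and p-polarized light at a lossless interface
    theta2 = snell(theta1, n1, n2)
    rs = (n1 * np.cos(theta1) - n2 * np.cos(theta2)) / \
         (n1 * np.cos(theta1) + n2 * np.cos(theta2))
    rp = (n2 * np.cos(theta1) - n1 * np.cos(theta2)) / \
         (n2 * np.cos(theta1) + n1 * np.cos(theta2))
    return rs**2, rp**2
```

At normal incidence between air and glass (n = 1.5), both polarizations reflect about 4% of the incident power, which is exactly the loss that antireflective coatings are designed to suppress; at Brewster's angle the p-polarized reflectance drops to zero.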
In this example, the antireflective coating is modeled by applying a Thin Dielectric Film node to the surfaces of the lenses.
The variables that are used to compute ray intensity are controlled by the “Intensity computation” combobox in the settings window for the Geometrical Optics interface. To compute a heat source using the energy lost by the rays, select “Using principal curvatures and ray power”.
The total power transmitted by each ray, Q, remains constant in nonabsorbing domains. In a homogeneous, absorbing domain, the power decays exponentially with the path length s,

Q(s) = Q_0 e^{-2 \kappa k_0 s} \qquad (4)

where k_0 is the free-space wave number of the ray.
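Assuming this standard closed-form decay law for a homogeneous medium, the attenuation is a one-line computation. The wavelength and path length in the usage note below are illustrative values, not parameters from the model.

```python
import math

def transmitted_power(Q0, kappa, wavelength_vac, path_length):
    # Exponential attenuation in a homogeneous absorbing medium:
    # Q(s) = Q0 * exp(-2 * kappa * k0 * s), with free-space wave number k0 = 2*pi/lambda0
    k0 = 2.0 * math.pi / wavelength_vac
    return Q0 * math.exp(-2.0 * kappa * k0 * path_length)
```

For a transparent medium (kappa = 0) the power is unchanged, while for small kappa the absorbed fraction is approximately 2*kappa*k0*s, which is why even a tiny imaginary index matters over a centimeter-scale lens at kilowatt powers.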
In order to apply the power lost by the rays as a source term in the Heat Transfer in Solids interface, it is necessary to add a Deposited Ray Power node to the absorbing domains. This node defines a variable Q_{\textrm{src}} (SI unit: W/m^3) for the volumetric heat source due to ray attenuation in the selected domains. As the rays propagate through the lenses, they contribute to the value of Q_{\textrm{src}},
Q_{\textrm{src}}(\mathbf{r}) = -\sum_{j=1}^{N_t} \frac{dQ_j}{dt}\, \delta(\mathbf{r} - \mathbf{q}_j) \qquad (5)
where Q_{j} (SI unit: W) is the power transmitted by the ray with index j, N_t is the total number of rays, and \delta is the Dirac delta function. In practice, each ray cannot generate a heat source term at its precise location because the rays occupy infinitesimally small points in space, whereas the underlying mesh elements have finite size, so the power lost by each ray is uniformly distributed over the mesh element the ray is currently in.
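The smoothing over mesh elements can be mimicked in one dimension: each ray's power loss is deposited uniformly into the element that contains it. The 1D element layout and uniform volumes below are illustrative simplifications of the actual 3D mesh bookkeeping.

```python
import numpy as np

def deposit_ray_power(ray_x, ray_dQ, element_edges, element_volumes):
    # Distribute each ray's power loss (W) uniformly over the 1D "element"
    # containing it, yielding a volumetric heat source per element (W/m^3).
    # element_edges must be sorted ascending and bracket all ray positions.
    Q_src = np.zeros(len(element_volumes))
    idx = np.searchsorted(element_edges, ray_x, side='right') - 1
    for i, dQ in zip(idx, ray_dQ):
        Q_src[i] += dQ / element_volumes[i]
    return Q_src
```

Because the power is divided by the element volume, refining the mesh concentrates the same deposited power into a smaller region, approaching the delta-function source in the limit.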
The following short animation illustrates how the heat source defined on domain mesh elements (top) is increased as the power transmitted by each ray (bottom) is reduced.
The temperature in the lens, T, can be computed by solving the heat equation,
\nabla \cdot (-k \nabla T) = Q_{\textrm{src}} \qquad (6)
where k is the thermal conductivity of the medium. A Heat Flux node is used to apply convective cooling at all boundaries that are exposed to the surrounding air,
-\mathbf{n} \cdot (-k \nabla T) = h(T_{\textrm{ext}} - T) \qquad (7)

where h is the heat transfer coefficient and T_{\textrm{ext}} is the temperature of the surrounding air.
As the temperature changes, it contributes a thermal strain term \epsilon_{\textrm{th}} to the total inelastic strain in the lenses. The thermal strain is defined as
\epsilon_{\textrm{th}} = \alpha (T - T_{\textrm{ref}}) \qquad (8)
where \alpha is the thermal expansion coefficient, T is the temperature of the medium, and T_{\textrm{ref}} is the reference temperature. The resulting displacement field \mathbf{u} is then computed by the Solid Mechanics interface.
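Both thermal perturbations are linear in the temperature rise, which the following sketch makes explicit. The BK7-like constants in the usage note are illustrative assumptions, not the material data used in the model.

```python
def thermal_strain(alpha, T, T_ref):
    # Isotropic thermal strain: eps_th = alpha * (T - T_ref)
    return alpha * (T - T_ref)

def index_shift(dn_dT, T, T_ref):
    # First-order temperature-induced refractive index change: dn = (dn/dT) * (T - T_ref)
    return dn_dT * (T - T_ref)
```

For example, with an assumed expansion coefficient of about 7.1e-6 1/K, a 27 K temperature rise produces a strain of roughly 1.9e-4; both the resulting shape change and the index shift contribute to the focal shift.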
If the power transmitted by the beam is very low, then the energy lost by the rays to their surroundings does not noticeably change the temperature of the medium. However, it is still possible for other phenomena, such as external forces and heat sources, to change the shape or temperature of the lenses.
In this case, it is necessary to first compute the displacement field and temperature in the domain, and then compute the ray trajectories. This is considered a unidirectional, or oneway, coupling because the temperature change and structural deformation can affect the ray trajectories, but not the other way around.
If the power transmitted by the beam is sufficiently large, then the dissipation of energy in an absorbing medium may generate enough heat to noticeably change the shape of the domain or the refractive index in the medium. In this case, the ray trajectories affect variables, such as temperature, that are defined on the surrounding domain, and these variables in turn affect the ray trajectories. This is considered a bidirectional, or twoway, coupling.
In this example, we assume that the laser is operating at constant power, so it is preferable to compute the temperature and displacement field using a Stationary study step. However, the ray trajectories are computed in the time domain.
To set up a bidirectional coupling between the ray trajectories and the temperature and displacement fields, we first create a Stationary study step to model the heating and deformation of the lenses and then add a Ray Tracing study step to compute the ray trajectories. The corresponding solvers are then enclosed within a loop using the For and End For nodes. The following image shows the resulting solver sequence.
The nodes between the For and End For nodes are repeated a number of times that is specified in the settings window for the For node. Furthermore, every time a solver is run, it uses the solution from the previous solver. In this way, it is possible to set up a bidirectional coupling between the two studies and iterate between them until a selfconsistent solution is reached.
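The For/End For loop is, in essence, a fixed-point iteration between two solvers. The sketch below uses scalar stand-ins for the temperature field and the deposited ray power; solve_thermal and solve_rays are hypothetical placeholders for the actual study steps, not COMSOL API calls.

```python
def iterate_coupled(solve_thermal, solve_rays, T0, n_iter=10, tol=1e-6):
    # Alternate the stationary thermal/structural solve and the ray tracing
    # solve until the temperature stops changing (self-consistent solution).
    T = T0
    Q = solve_rays(T)          # deposited power for the initial temperature
    for _ in range(n_iter):
        T_new = solve_thermal(Q)  # heating from the current deposited power
        Q = solve_rays(T_new)     # re-trace rays through the heated lenses
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T
```

With toy linear "solvers" the iteration converges geometrically to the unique fixed point, which is what the For loop achieves for the real, nonlinear problem when the coupling is not too strong.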
We now examine the ray trajectories close to the target for two cases: a 1-watt beam and a 3,000-watt beam.
For the 1-watt beam, we observe that the focal point of the beam is extremely close to the target surface. The rays do not converge to a single point due to spherical aberration. For the 3,000-watt beam, we see that the beam has already started to diverge by the time it reaches the target surface. The following image compares the deposited ray power at the target for the two cases.
Comparison of the deposited ray power for a 3,000-watt beam (left) and a 1-watt beam (right). For comparison and visualization purposes, the color expression for deposited power has been normalized and plotted on a logarithmic scale.
The Geometrical Optics interface also includes built-in operators for evaluating the sum, average, maximum, or minimum of an expression over all rays. Using these operators, it is possible to quantify the beam width in a variety of ways. As shown in the plot below, the 3,000-watt beam is focused more than 2 millimeters away from the target surface.
We have seen that the temperature change and the resulting thermal expansion in a high-powered laser system can significantly shift the focal point of the beam. With the Ray Optics Module, it is possible to take thermally induced focal shift into account when designing such systems.
To learn more about computing ray trajectories in thermally deformed lens systems, please refer to the Thermal Lensing in High-Power Laser Focusing Systems model.
COMSOL Multiphysics has three add-on products for electromagnetic wave propagation: the Ray Optics Module, the Wave Optics Module, and the RF Module. Let’s take a look at the differences.
The RF Module and the Wave Optics Module both offer an Electromagnetic Waves, Frequency Domain interface, which solves the full-wave form of Maxwell’s equations via the finite element method (FEM). This requires a finite element mesh that is fine enough to resolve the electromagnetic waves, as shown in the figure below.
Fullwave simulation of scattering off of a metallic sphere. The variations in the magnitude of the electric field require a fine mesh everywhere.
This approach is appropriate when the solutions we are interested in have significant variations in all directions and are on a length scale comparable to the wavelength.
The Wave Optics Module also includes the Electromagnetic Waves, Beam Envelopes interface, which solves a modified version of the full-wave Maxwell’s equations, again via the finite element method. The Beam Envelopes formulation requires, as input, an approximate and slowly varying wave vector. Rather than solving for the electromagnetic fields themselves, this formulation solves for the slowly varying electric field amplitude.
Beam envelopes simulation of a directional coupler. The gradual variation in the field magnitude allows for a very coarse mesh in that direction.
The advantage of the Beam Envelopes formulation is that a very coarse mesh can be used in the direction of propagation. The limitation is that the wave vector field must be approximately uniform or slowly varying throughout the modeling domain. However, this is indeed the case for a range of important optical devices such as optical fibers or directional couplers.
The Ray Optics Module includes the Geometrical Optics interface, which treats electromagnetic waves as rays. It does not use the finite element method; instead, it traces the rays through the modeling domain by solving a set of ordinary differential equations for the position and wave vector. Although the domains through which the rays travel must be meshed, the mesh can be very coarse. Only at curved surfaces must the mesh be refined.
Geometrical optics simulation of a plane wave scattering from a cylinder. The ray intensity decreases after the rays are reflected by the curved surface, causing the waves to diverge. A very coarse mesh can be used, except on the curved boundaries.
The Ray Optics Module traces rays of light propagating through different media and can consider many different behaviors of the rays at boundaries. The wavelength dependence of the refractive indices of the media can be taken into account. It is also possible to compute the intensity, the phase, and the polarization of light, and how these vary as the ray passes through different media and across boundaries.
Let’s now take a deeper look at the various physical phenomena that can be modeled.
Refraction and reflection at a dielectric interface.
A ray of light propagating through a medium of uniform refractive index will travel in a straight line. When the ray encounters an interface between materials of different refractive indices, the ray will be partially reflected and partially refracted. This behavior is governed by Snell’s Law and the Fresnel equations and is handled automatically, by the Ray Optics Module, at interfaces between different materials.
A light ray bends as it passes through a graded index material.
Light propagating through a medium with a nonuniform refractive index will bend in the direction of a relatively higher refractive index. Such graded index behavior can be modeled simply by defining the refractive index as a smooth, spatially varying function. The Ray Optics Module inherits the powerful tools of COMSOL Multiphysics for creating spatially varying materials.
For instance, the Luneburg Lens example model available in the Model Library of the Ray Optics Module defines the refractive index simply as sqrt(2-(x^2+y^2+z^2)). Alternatively, you can define spatially distributed media as a lookup table from a file or, more spectacularly, as a function of another physics field quantity, such as n = f(T(x,y,z)), where n is the refractive index, f is some function, and T(x,y,z) is a spatially varying temperature field computed by a heat transfer simulation in COMSOL Multiphysics. More on this in a blog post coming soon.
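A temperature-dependent index of the form n = f(T(x, y, z)) can be sketched as a plain function composition, as below. The base index n0, the dn/dT coefficient, and the reference temperature are illustrative values, not material data from any particular model.

```python
def refractive_index(x, y, z, T_field, n0=1.5168, dn_dT=2e-6, T_ref=293.15):
    # Hypothetical linear temperature dependence: n = n0 + (dn/dT) * (T - T_ref).
    # T_field plays the role of a precomputed temperature solution T(x, y, z).
    return n0 + dn_dT * (T_field(x, y, z) - T_ref)
```

In a coupled model, T_field would be the interpolated heat transfer solution, so a hot spot in the glass automatically becomes a region of locally higher index that bends the rays.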
Specular (left) and diffuse (right) reflection of a ray of light.
At boundaries, the ray can propagate through unimpeded as if the boundary were completely transparent, it can be completely absorbed, or it can be reflected. Reflections occur at surfaces of materials through which light cannot pass and will be either specular, diffuse, or a mixture of the two. Specular reflection occurs on highly polished metal surfaces, whereas most other surfaces reflect more diffusely.
Reflection and transmission through a (possibly multilayer) thin dielectric film.
It is also possible to model structures composed of thin layers of different materials, such as dielectric mirrors or antireflective coatings. These can be modeled by adding one or more Thin Dielectric Film nodes to a boundary. The effective reflection and transmission coefficients through the multilayer stack are then computed without explicitly modeling each layer. This is demonstrated in the Anti-Reflective Coating, Multilayer model.
Reflection and transmission into various diffraction orders from an optical grating.
On the other hand, structures with periodic wavelength-scale variation in the plane of the boundary can be modeled with the Grating boundary condition. Diffraction gratings have periodic variations in their structure and can split and diffract a ray into several different rays, which are termed diffraction orders. It is also possible to compute the characteristics of the grating via the full-wave formulation and use this as an input, as demonstrated in the Diffraction Grating model.
The polarization of a ray of light changes as it goes through various optical elements.
Lastly, boundary conditions can be used to manipulate the polarization of the ray. Linear polarizers, linear and circular wave retarders, ideal depolarizers, and optical components with arbitrary Mueller matrices can all be represented as boundary conditions. These conditions are demonstrated in the Linear Wave Retarder model.
The rays themselves can be launched into the model from domains, boundaries, and any user-specified points. The rays can have a spherical, hemispherical, or conical distribution. It is also possible to model illumination from the sun by specifying a position on Earth. Along with the path of the ray, the intensity, polarization, and phase can also be computed, if desired. This makes it possible to compute both optical intensity on surfaces and interference patterns. Examples of this include modeling a solar dish and computing the interference pattern of a Michelson interferometer.
The Ray Optics Module does not directly consider interactions with structures that have size comparable to the wavelength.
For example, consider a plane wave scattering off of a diamond-shaped metallic object, as shown below. If the wavelength is comparable to the object size, there will be significant diffraction around the object, and the region behind it will be illuminated. Similarly, a plane wave incident upon a wavelength-scale slit will experience significant diffraction and broadening. Modeling either of these effects requires a full-wave approach using the Wave Optics Module or the RF Module.
A diamond-shaped object scatters an electromagnetic wave in all directions (left). There is significant illumination behind the scatterer. A plane wave incident upon a slit (right) will spread out. The color in both plots indicates the electric field norm.
The Geometrical Optics approach, on the other hand, does not consider these diffractive phenomena. Rays representing a plane wave will be reflected specularly from the surfaces and will not illuminate the region behind the object. Rays passing through a slit will not spread out. These are both valid approximations if the wavelength of light is much smaller than the object’s size.
A diamond-shaped object in a plane wave modeled with the Geometrical Optics approach (left) and a plane wave passing through a slit (right). Neither experiences any diffraction.
Currently, the Ray Optics Module also does not consider refractive indices that are dependent upon the intensity of light. However, such problems can be addressed with the Beam Envelopes formulation in the Wave Optics Module, as demonstrated in the example of Self Focusing in BK7 Glass.
The complete capabilities of the Ray Optics Module are demonstrated by the Model Library examples, available within the software and on our online Model Gallery.
If you are interested in using the Ray Optics Module for any of your modeling needs, please contact us.
You’ve probably heard the word “graphene” in the news and here on the blog numerous times, usually with references to its powerful capabilities in advancing technology within various industries. It’s not every day that a material quite as unique and powerful as graphene comes along and it’s safe to say the world has taken notice.
The structure of graphene. Image by K. M. Al-Shurman and H. A. Naseem, taken from the paper titled "CVD Graphene Growth Mechanism on Nickel Thin Films".
Graphene has been a relevant topic on the minds of many, ourselves included. In a recent series of blog posts, we highlighted the revolution behind this material, from its exotic properties and production methods to simulating its use in various applications.
Our last post in the series emphasized research on the “wonder material” that led to the accidental discovery of 2D glass. While the discovery in itself is remarkable, the point on which I’d like to focus is how they actually grew the graphene used in the research — through chemical vapor deposition (CVD).
Chemical vapor deposition is a chemical process designed to create high-performance, high-purity solid materials. In this method, gas molecules are combined in a reaction chamber containing a heated substrate. The interaction between the gases and the heated substrate causes the gases to react and/or decompose on the substrate’s surface, thus producing a material film.
This synthesis method is particularly valued for the quality of the materials it produces. Compared to other coating methods, materials deposited by CVD tend to possess greater purity, hardness, and resistance to damage. An additional advantage of this method is the wide range of materials that can be deposited, one of which is graphene.
Among synthesis techniques, chemical vapor deposition has proved promising for the development of high-quality graphene films. The process involves growing graphene films on substrates made of transition metals, one example being nickel (Ni). In this case, decomposed carbon atoms diffuse into the nickel at a high temperature and then precipitate on the surface of the nickel during the cooling process.
Because of the many growth conditions involved in the CVD method, producing single-layer graphene and maintaining control over the quality of the graphene film can be very challenging. One research team from the University of Arkansas recognized the need to better understand the growth mechanism as well as the optimal conditions for graphene production.
Using COMSOL Multiphysics, the researchers created a graphene synthesis model to analyze the dissolution-precipitation mechanism for CVD graphene growth on nickel. In the study, they analyzed factors affecting the number of graphene layers synthesized, including growth time and temperature, rate of cooling, carbon solubility in nickel, and the thickness of the nickel film.
A schematic showing the mechanism for CVD graphene growth on Ni. Image by K. M. Al-Shurman and H. A. Naseem, taken from the poster titled "CVD Graphene Growth Mechanism on Nickel Thin Films".
In analyzing the diffusion of the carbon atoms, the team found that the higher the temperature of the Ni film, the faster the diffusion process. From their results, they also concluded that additional time is needed for carbon atoms to reach their saturated state in a thicker Ni film.
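The thickness dependence follows the familiar diffusion scaling tau ~ L^2/D: the time for carbon to saturate the film grows with the square of the film thickness. The sketch below is only this scaling argument; the actual diffusivity would come from the temperature-dependent kinetics in the study and is left as a parameter here.

```python
def saturation_time_estimate(thickness, diffusivity):
    # Characteristic diffusion time tau ~ L^2 / D:
    # thicker films take quadratically longer to reach carbon saturation
    return thickness ** 2 / diffusivity
```

For example, doubling the Ni film thickness quadruples the estimated saturation time, consistent with the team's observation that thicker films need more growth time.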
Additionally, the researchers modeled supersaturation by cooling. In the supersaturation process, carbon atoms segregate on the surface of the Ni thin film. When the film was cooled from 900°C to 725°C, 1.7 layers of graphene were obtained on the film’s surface. This number of graphene layers agreed reasonably well with experimental data.
Graph highlighting the number of layers produced when the Ni film is cooled from 900°C to 725°C. Image by K. M. Al-Shurman and H. A. Naseem, taken from the presentation titled "CVD Graphene Growth Mechanism on Nickel Thin Films".
Nonequilibrium cold plasmas are characterized by an electron temperature that is much higher than the gas temperature. During plasma modeling, the ion temperature is often set equal to the gas temperature. This is an acceptable approximation as long as the ions undergo sufficient collisions with neutral gas molecules and thus thermalize with the background gas. It can break down, however, in inductively coupled plasmas (ICPs), where the pressure is low and the ions' mean free path approaches the length scale of the plasma reactor. Because the number of collisions is low, the ion temperature then lies somewhere between the gas and electron temperatures.
While COMSOL Multiphysics does not solve for the ion temperature directly, there are several options available for defining it.
You can choose to set the ion temperature equal to the gas temperature or use a user-defined value or expression. Alternatively, you can define a correlation between the electric field and the ion mobility and employ an Einstein relation to calculate the ion temperature using the local field approximation (LFA), which is available in the COMSOL software.
As mentioned, your choice of ion temperature (especially for low-pressure plasmas) could significantly impact your model’s results. Below, you will find a theoretical explanation of this behavior.
For the heavy species transport (and ion transport), a continuity equation with a drift-diffusion approximation is solved for each species. The variation of the mass fraction, w_k, for species k depends on a flux, \mathbf{j}_k, and a reaction term, R_k. In this case, convection and thermal diffusion are neglected for simplicity:

\rho \frac{\partial w_k}{\partial t} = \nabla \cdot \mathbf{j}_k + R_k
To compute the flux, \mathbf j_k, a mixture averaged diffusion coefficient, D_{k,m}, and the ion mobility, \mu_{k,m}, are required:
Based on the kinetic theory of gases, binary diffusion coefficients, D_{kj}, are calculated to get the mixture averaged diffusion coefficient, D_{k,m}. You may have already noticed that Lennard-Jones parameters, \sigma and \epsilon / k_B, have to be specified for each plasma species:
The ion mobility is then calculated using an Einstein relation:

\mu_{k,m} = \frac{q D_{k,m}}{k_B T_i}

where q is the ion charge, k_B is the Boltzmann constant, and T_i is the ion temperature.
At the reactor walls, the ion flux, \mathbf j_k, to the wall, is computed according to:
The ion temperature is needed to compute the ion mobility and flux to the reactor walls, so the choice in ion temperature especially affects the ions’ transport properties within the plasma model. If the migration part of the flux is large, in comparison to the diffusion part, then the choice in ion temperature particularly grows in importance. This is notably true in cases at very low pressures or at high electric field strengths.
To reiterate, you can also compute the ion temperature with the help of the LFA, available in COMSOL Multiphysics.
The LFA assumes that the local velocity distribution of the particles is in equilibrium with the local electric field. Therefore, quantities like the ion temperature or ion mobility can be expressed in terms of the (reduced) electric field. The LFA requires that local changes in the electric field are small in comparison to the mean free path length. However, this is not always true, particularly in the boundary layer.
The following expression for the reduced electron mobility as a function of the reduced field can be found in the paper "Two-fluid modelling of an abnormal low-pressure glow discharge" and is used in the subsequent ICP example below.
In the equation above, the reduced electric field, E/n, is given in Townsends (Td).
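For reference, converting a reduced field to Townsends is a one-liner, using the definition 1 Td = 1e-21 V*m^2.

```python
def reduced_field_Td(E, n_gas):
    # Reduced electric field E/n expressed in Townsends: 1 Td = 1e-21 V*m^2
    # E in V/m, n_gas (neutral number density) in 1/m^3
    return (E / n_gas) / 1e-21
```

As a rough point of reference, at the 0.02 torr and 300 K conditions used below, the ideal gas law gives a neutral density of about 6.4e20 m^-3, so even modest fields of order 100 V/m correspond to reduced fields of well over 100 Td.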
To demonstrate the impact your ion temperature choice has on an ICP model, let’s take a look at an example.
An inductively coupled plasma reactor (similar to the GEC ICP Reactor, Argon Chemistry model) was modeled three times with varying ion temperatures. Because ICPs work at particularly low pressures, the ion temperature choice has to be considered carefully.
The ion temperature was:

- Model 1: set equal to the gas temperature, 300 K
- Model 2: set to a user-defined value of 0.1 eV
- Model 3: computed using the local field approximation
The other model parameters were as follows:
- Gas temperature: 300 K
- Coil power: 500 W
- Pressure: 0.02 torr
- Electron mobility: 4E24 (1/(m·V·s))
The mean ion temperature from Model 3, which was computed from D_{k,m} / \mu_{k,m}, amounts to 0.22 eV, or about 2515 K.
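The conversion from D/mu to an ion temperature follows the Einstein relation T = qD/(k_B * mu); for singly charged ions, D/mu expressed in volts is numerically the temperature in eV. The sketch below applies this relation (converting the rounded 0.22 eV figure gives roughly 2550 K, of the same order as the quoted value; the small difference comes from rounding).

```python
K_B = 1.380649e-23    # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def ion_temperature_from_einstein(D, mu):
    # Einstein relation D/mu = k_B * T / q, solved for T.
    # For singly charged ions, D/mu in volts equals the temperature in eV.
    T_eV = D / mu
    T_K = Q_E * D / (K_B * mu)
    return T_eV, T_K
```

This is why the ratio of the mixture-averaged diffusion coefficient to the ion mobility can be read directly as an ion temperature.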
The following figures represent the electron density for all three models after 0.001 seconds.
Model 1: Electron density (T_ion = 300 K).
Model 2: Electron density (T_ion = 0.1 eV).
Model 3: Electron density (T_ion from LFA).
As seen above, using a higher ion temperature value significantly increases the electron density.
The modeling results are also compared in the table below. The maximum electron density, maximum electron temperature, and the absorbed power are displayed.
Model                         | Max. Electron Density [1/m³] | Max. Electron Temperature [eV] | Resistive Losses [W]
1. T_i = 300 K                | 4.3E17                       | 4.1                            | 387
2. T_i = 0.1 eV               | 2.6E18                       | 2.8                            | 407
3. Local Field Approximation  | 3.3E18                       | 2.3                            | 41
Based on the table, we can deduce that increasing the ion temperature leads to a significant increase not only in the electron density but also in the absorbed power. Additionally, the electron temperature noticeably decreases.
The example above illustrates the impact the choice in ion temperature has on the modeling results of an ICP. A comparison of the results with literature values is essential in judging which assumptions give the best outcomes.
In a period in which energy efficiency and sustainability are heavily emphasized, magnetic cooling has found its way into new technologies, from industrial to household applications. Based on the magnetocaloric effect, this cooling technology relies on the phenomenon in which the temperature of a magnetocaloric material is altered by exposing it to an applied magnetic field. The applied field causes the magnetic dipoles to align, resulting in an increase in temperature. Removing the magnetic field causes the atoms to become disorganized again, and the material then cools.
With continued research on the optimization of this technology, the potential to reduce energy consumption in homes and offices across the world has become a more realistic goal. This left one group of researchers wondering if this same method could be used to address another source of high energy consumption — heating and air conditioning in electric vehicles.
Examples of electric cars ("Ride and Drive EVs Plug'n Drive Ontario" by Mariordo. Licensed under Creative Commons Attribution-Share Alike 2.0 via Wikimedia Commons).
Using COMSOL Multiphysics, a team of researchers from the National Institute of Applied Science designed a magnetocaloric HVAC system for an electric vehicle.
These vehicles rely on energy from batteries for heating and air conditioning, just as they do for propulsion. The energy demand is heightened by the vehicle’s lack of waste heat from a thermal engine, which in conventional vehicles makes it easier to heat the interior. The additional need for cooling to prevent overheating of the vehicle’s battery further contributes to the energy usage, while highlighting the importance of adequate cooling systems.
In their design, the researchers used a 2D model to analyze an active magnetic regenerator refrigeration cycle for a magnetic refrigeration system. In this case, the magnetocaloric regenerator was composed of thin parallel plates, with microchannels carrying heat transfer fluid alternating in between.
The geometry of an active magnetic regenerator. Image by A. Noume, C. Vasile, and M. Risser, taken from the presentation titled "Modeling of a Magnetocaloric System for Electric Vehicles".
As a means to optimize the efficiency of the system, the team simulated the behavior of the magnetocaloric regenerator coupled with the circulating fluid. They particularly focused on the convective heat transfer coefficient connected to the heat transfer between magnetocaloric material and coolant — an especially important parameter in the overall design.
During the refrigeration cycle simulation, the researchers analyzed the hot- and cold-end temperature variation. The temperature span (the difference between the maximum and minimum temperature) was found to be around 8 K. Adding new materials and alloys was recognized as a potential method of optimizing thermal properties in future designs of these systems.
The results of this study provide a valuable foundation for the use of magnetic cooling technology in electric vehicles. Both rooted in the quest for lower energy consumption, the combination of these two innovative technologies could greatly enhance the autonomy of electric vehicles and make magnetic cooling more mobile.
Bell Labs, the research arm of Alcatel-Lucent, is committed to designing and implementing new technologies for significantly improving energy management for the next generation of telecommunications products. Working to meet this goal, Bell Labs founded the GreenTouch consortium, a leading organization of researchers dedicated to reducing the carbon footprint of information and communications technology. It is the goal of GreenTouch and Bell Labs to demonstrate the key components needed to increase network energy efficiency by a factor of 1,000 compared to 2010 levels.
The Thermal Management and Energy Harvesting & Storage Research Group within the Efficient Energy Transfer (ηET) Department at Bell Labs (led by Dr. Domhnaill Hernon) is one such group working towards this goal. The department focuses on two main areas. The first is to deliver game-changing thermal technology into Alcatel-Lucent products across all scales and across multiple disciplines, ranging from reliable active air cooling to single- and multiphase liquid cooling. One way this is done is through research into improving the thermal management of laser light transmission in photonic devices, with the aim of achieving a 50 to 70 percent reduction in energy usage per bit. This approach is explored here.
In addition, the department performs research into Alternative Energy and Storage solutions to enable power autonomous deployment of wireless sensors and small cell technology. In this blog post, we will focus on the production of an energy harvesting device used to power wireless sensors that can produce up to 11 times more energy than current approaches.
In order to improve energy efficiency in photonic devices, the Thermal Management department is using multiphysics simulation to model new designs for cooling photonic devices, designs which rely on the thermoelectric effect. Photonic devices used for communications contain a thermoelectric material that cools the device.
In these materials, a temperature difference is created when an electric current is applied: one side of the material heats up while the other side cools down. When thermoelectric materials are used to control the temperature of photonic devices, the system is known as a thermally integrated photonics system (TIPS). Currently, a large thermoelectric cooler (TEC) is used to cool the entire system within the photonic device. While TECs can be used for precise temperature control, they are highly inefficient. The group's new approach improves thermal management by using an individual micro TEC (μTEC) to cool each laser in the device.
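The trade-off behind TEC efficiency can be made concrete with the standard lumped model of a thermoelectric cooler, in which Peltier cooling competes with Joule heating and thermal back-conduction. The sketch below is a generic textbook model with made-up parameter values, not a representation of the Bell Labs μTEC design:

```python
# Standard lumped TEC model -- all parameter values below are hypothetical,
# chosen only to illustrate the trade-off, not Bell Labs device data.
S = 0.05               # effective Seebeck coefficient (V/K)
R = 2.0                # electrical resistance (ohm)
K = 0.01               # thermal conductance between the faces (W/K)
Tc, Th = 300.0, 320.0  # cold-side and hot-side temperatures (K)

def cooling_power(I):
    """Net heat pumped from the cold side: Peltier term minus half the
    Joule heat (which splits between the two faces) minus back-conduction."""
    return S * Tc * I - 0.5 * R * I**2 - K * (Th - Tc)

# Setting d(cooling_power)/dI = 0 gives the current that maximizes cooling:
I_opt = S * Tc / R
Q_max = cooling_power(I_opt)
```

Driving the TEC harder than I_opt wastes power in Joule heating, which is one reason a small μTEC sized to a single laser can beat one large TEC cooling the whole package.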
Schematic of the thermally integrated photonics system (TIPS) architecture, which includes microthermoelectric and microfluidic components.
Using COMSOL Multiphysics, the team simulated a TIPS architecture to be used in new laser devices, including the electrical, optical, and thermal performance of the device. In addition to cooling, these devices are used by telecommunication laser devices in order to maintain the correct output wavelength, output optical power, and data transmission rates.
Using simulation, the team investigated temperature control and heat flux management in the integrated TIPS and μTEC architecture. In particular, they examined how temperature control can be achieved in these systems through the integration of μTECs with the semiconductor laser architectures. The simulation of the integrated laser and μTECs can be seen in the image below, on the right.
Multiphysics simulation of a laser with an integrated μTEC where temperature (surface plot), current density (streamlines), and heat flux (surface arrows) are shown.
Another project currently being conducted by Bell Labs is the design of an energy harvesting device that can convert ambient vibrations from motors, AC, and HVAC systems into usable energy. This would eliminate the need to frequently replace the batteries in wireless sensors used across the network. Applications for this new design include monitoring energy usage in large facilities and sensors for the future Internet of Things (IoT).
The team’s design used the principles of the conservation of momentum and velocity amplification to convert vibrations into electricity using electromagnetic induction. The device uses a novel approach with multiple degrees of freedom to amplify the velocity of the smallest mass in the system. Simulation played a big part in the design, as parametric sweeps allowed the team to determine how different structural, electrical, and magnetic parameters would affect one another and the design as a whole. The figure below shows the novel design (left) along with the simulation of the device (right).
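The velocity-amplification idea can be illustrated with the textbook result for sequential elastic collisions between progressively lighter masses: each stage multiplies the velocity of the lighter mass by 2M/(M + m). This is a toy sketch of the principle with hypothetical mass ratios, not the actual machined-spring harvester design:

```python
# Toy sketch of velocity amplification via sequential elastic collisions.
# The 4:1 mass ratios below are hypothetical, not the harvester's values.

def elastic_hit(M, m, V, v=0.0):
    """1D elastic collision (momentum and kinetic energy conserved):
    returns post-collision velocities of M (initially V) and m (initially v)."""
    V2 = ((M - m) * V + 2 * m * v) / (M + m)
    v2 = ((m - M) * v + 2 * M * V) / (M + m)
    return V2, v2

masses = [1.0, 0.25, 0.0625]  # each mass strikes the next, lighter one
V = 1.0                       # initial velocity of the heaviest mass (m/s)
for M, m in zip(masses, masses[1:]):
    _, V = elastic_hit(M, m, V)
# Each 4:1 stage multiplies the velocity by 2*M/(M + m) = 1.6
```

Since the harvested electromagnetic power scales with the velocity of the smallest mass, cascading even a few stages can substantially increase the output.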
Left: Prototype of the novel machined-spring energy harvester. Right: Simulation of the energy harvester, showing von Mises stress.
Although these new designs are not yet available on the market, researchers at Bell Labs believe that, given the accuracy achieved through their simulations, the devices should be ready for commercial production in as little as five years. Whereas such designs would previously have required years of time-consuming physical testing, the Bell Labs team anticipates a much faster time-to-market thanks to the use of multiphysics simulation.
For more detail, read the full story “Meeting High-Speed Communications Energy Demands Through Simulation”, which appeared in Multiphysics Simulation magazine.
Piezoelectric materials become electrically polarized when strained. From a microscopic perspective, the displacement of charged atoms within the crystal unit cell (when the solid is deformed) produces a net electric dipole moment within the medium. In certain crystal structures, this combines to give an average macroscopic dipole moment and a corresponding net electric polarization. This effect, known as the direct piezoelectric effect, is always accompanied by the inverse piezoelectric effect, in which the solid becomes strained when placed in an electric field.
Several material properties must be defined in order to fully characterize the piezoelectric effect within a given material. The relationship between the material polarization and its deformation can be defined in two ways: the strain-charge or the stress-charge form. Different sets of material properties are required for each of these equation forms.
To complicate things further, there are two standards used in the literature: the IEEE 1978 Standard and the IRE 1949 standard, and the material properties take different forms within the two standards. IEEE actually revised the 1978 standard in 1987, but this version of the standard contained a number of errors and was subsequently withdrawn. Confused yet? I was when I first started reading the literature!
Today’s blog post describes in detail the different equation forms and standards, with a focus on the particular case of quartz — the material that causes the most confusion. In both academia and industry, the quartz material properties are commonly defined within the older 1949 IRE standard. Meanwhile, other materials are now almost always defined using the 1978 IEEE standard. To make matters worse, it is not common to indicate which standard is being employed when specifying the material properties.
The coupling between the structural and electrical domains can be expressed in two ways: with the strain written in terms of the stress, together with the permittivity at constant stress (the strain-charge form), or with the stress written in terms of the strain, together with the permittivity at constant strain (the stress-charge form). The two forms are given below.
The strain-charge form is written as:

S = s_{E} T + d^{T} E
\\
D = d\,T + \epsilon_0 \epsilon_{rT} E

where S is the strain, T is the stress, E is the electric field, and D is the electric displacement field. The material parameters s_{E}, d, and ε_{rT} correspond to the material compliance, coupling properties, and relative permittivity at constant stress; d^{T} denotes the transpose of d. ε_{0} is the permittivity of free space. These quantities are tensors of rank 4, 3, and 2, respectively. The tensors, however, are highly symmetric for physical reasons. They can be represented as matrices within an abbreviated subscript notation, which is usually more convenient. In the literature, the Voigt notation is almost always used.
Within this notation, the above two equations can be written as:
\left(
\begin{array}{l}
S_{xx} \\
S_{yy} \\
S_{zz} \\
S_{yz} \\
S_{xz} \\
S_{xy} \\
\end{array}
\right)
=
\left(
\begin{array}{llllll}s_{E11} & s_{E12} &s_{E13} &s_{E14} &s_{E15} &s_{E16}\\
s_{E21} & s_{E22} &s_{E23} &s_{E24} &s_{E25} &s_{E26}\\
s_{E31} & s_{E32} &s_{E33} &s_{E34} &s_{E35} &s_{E36}\\
s_{E41} & s_{E42} &s_{E43} &s_{E44} &s_{E45} &s_{E46}\\s_{E51} & s_{E52} &s_{E53} &s_{E54} &s_{E55} &s_{E56}\\s_{E61} & s_{E62} &s_{E63} &s_{E64} &s_{E65} &s_{E66}\\\end{array}
\right)\left(
\begin{array}{l}T_{xx} \\
T_{yy} \\
T_{zz} \\
T_{yz} \\
T_{xz} \\
T_{xy} \\
\end{array}
\right)
+
\left(
\begin{array}{lll}
d_{11} & d_{21} & d_{31} \\
d_{12} & d_{22} & d_{32} \\
d_{13} & d_{23} & d_{33} \\
d_{14} & d_{24} & d_{34} \\
d_{15} & d_{25} & d_{35} \\
d_{16} & d_{26} & d_{36} \\
\end{array}
\right)
\left(
\begin{array}{l}
E_{x} \\
E_{y} \\
E_{z} \\
\end{array}
\right)
\\
\left(
\begin{array}{l}
D_{x} \\
D_{y} \\
D_{z} \\
\end{array}
\right)
=
\left(
\begin{array}{llllll}
d_{11} & d_{12} &d_{13} & d_{14} & d_{15} & d_{16}\\
d_{21} & d_{22} &d_{23} & d_{24} & d_{25} & d_{26}\\
d_{31} & d_{32} &d_{33} & d_{34} & d_{35} & d_{36}\\
\end{array}
\right)\left(
\begin{array}{l}
T_{xx} \\
T_{yy} \\
T_{zz} \\
T_{yz} \\
T_{xz} \\
T_{xy} \\
\end{array}
\right)
+
\epsilon_0 \left(
\begin{array}{lll}
\epsilon_{rT11} & \epsilon_{rT12} & \epsilon_{rT13} \\
\epsilon_{rT21} & \epsilon_{rT22} & \epsilon_{rT23} \\
\epsilon_{rT31} & \epsilon_{rT32} & \epsilon_{rT33} \\
\end{array}
\right)
\left(
\begin{array}{l}
E_{x} \\
E_{y} \\
E_{z} \\
\end{array}
\right)
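As a quick numerical sanity check of the strain-charge form, the matrix equations above can be evaluated directly. In the sketch below, the coupling matrix uses the commonly quoted quartz magnitudes |d11| ≈ 2.3 pC/N and |d14| ≈ 0.67 pC/N (with one sign convention applied), while the compliance and permittivity matrices are simplified placeholders rather than real quartz data:

```python
import numpy as np

eps0 = 8.854187817e-12  # vacuum permittivity (F/m)

# Coupling matrix d (3 x 6, C/N) with the Trigonal 32 zero/symmetry pattern.
# Magnitudes are the commonly quoted quartz values; signs depend on the
# standard and handedness (see the discussion of quartz later in the post).
d11, d14 = -2.3e-12, -0.67e-12
d = np.array([
    [d11, -d11, 0.0,  d14,  0.0,      0.0],
    [0.0,  0.0, 0.0,  0.0, -d14, -2 * d11],
    [0.0,  0.0, 0.0,  0.0,  0.0,      0.0],
])

# Placeholder compliance and permittivity -- NOT real quartz data.
sE = np.eye(6) * 1e-11               # compliance (1/Pa)
erT = np.diag([4.52, 4.52, 4.68])    # relative permittivity at constant stress

T = np.array([1e6, 0, 0, 0, 0, 0.0])  # stress, Voigt order xx,yy,zz,yz,xz,xy (Pa)
E = np.array([0.0, 0.0, 1e3])         # electric field (V/m)

# Strain-charge form: S = sE T + d^T E,  D = d T + eps0 * erT E
S = sE @ T + d.T @ E
D = d @ T + eps0 * erT @ E
```

Note how the Voigt notation reduces the rank-3 coupling tensor to a plain 3 × 6 matrix, so both constitutive equations become ordinary matrix-vector products.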
The stress-charge form is as follows:

T = c_{E} S - e^{T} E
\\
D = e\,S + \epsilon_0 \epsilon_{rS} E
The material parameters c_{E}, e, and ε_{rS} correspond to the material stiffness, coupling properties, and relative permittivity at constant strain. ε_{0} is the permittivity of free space. Once again, these quantities are tensors of rank 4, 3, and 2 respectively, but can be represented using the abbreviated subscript notation.
Using the Voigt notation and writing out the components gives:
\left(
\begin{array}{l}
T_{xx} \\
T_{yy} \\
T_{zz} \\
T_{yz} \\
T_{xz} \\
T_{xy} \\
\end{array}
\right)
=
\left(
\begin{array}{llllll}c_{E11} & c_{E12} &c_{E13} &c_{E14} &c_{E15} &c_{E16}\\
c_{E21} & c_{E22} &c_{E23} &c_{E24} &c_{E25} &c_{E26}\\
c_{E31} & c_{E32} &c_{E33} &c_{E34} &c_{E35} &c_{E36}\\
c_{E41} & c_{E42} &c_{E43} &c_{E44} &c_{E45} &c_{E46}\\c_{E51} & c_{E52} &c_{E53} &c_{E54} &c_{E55} &c_{E56}\\c_{E61} & c_{E62} &c_{E63} &c_{E64} &c_{E65} &c_{E66}\\\end{array}
\right)\left(
\begin{array}{l}S_{xx} \\
S_{yy} \\
S_{zz} \\
S_{yz} \\
S_{xz} \\
S_{xy} \\
\end{array}
\right)
-
\left(
\begin{array}{lll}
e_{11} & e_{21} & e_{31} \\
e_{12} & e_{22} & e_{32} \\
e_{13} & e_{23} & e_{33} \\
e_{14} & e_{24} & e_{34} \\
e_{15} & e_{25} & e_{35} \\
e_{16} & e_{26} & e_{36} \\
\end{array}
\right)
\left(
\begin{array}{l}
E_{x} \\
E_{y} \\
E_{z} \\
\end{array}
\right)
\\
\left(
\begin{array}{l}
D_{x} \\
D_{y} \\
D_{z} \\
\end{array}
\right)
=
\left(
\begin{array}{llllll}
e_{11} & e_{12} &e_{13} & e_{14} & e_{15} & e_{16}\\
e_{21} & e_{22} &e_{23} & e_{24} & e_{25} & e_{26}\\
e_{31} & e_{32} &e_{33} & e_{34} & e_{35} & e_{36}\\
\end{array}
\right)\left(
\begin{array}{l}
S_{xx} \\
S_{yy} \\
S_{zz} \\
S_{yz} \\
S_{xz} \\
S_{xy} \\
\end{array}
\right)
+
\epsilon_0 \left(
\begin{array}{lll}
\epsilon_{rS11} & \epsilon_{rS12} & \epsilon_{rS13} \\
\epsilon_{rS21} & \epsilon_{rS22} & \epsilon_{rS23} \\
\epsilon_{rS31} & \epsilon_{rS32} & \epsilon_{rS33} \\
\end{array}
\right)
\left(
\begin{array}{l}
E_{x} \\
E_{y} \\
E_{z} \\
\end{array}
\right)
The matrices defined in the above equations are the key material properties that need to be defined for a piezoelectric material. Note that for many materials, a number of the elements in each of the matrices are zero and several others are related, as a result of the crystal symmetry.
Using the international notation for describing crystal symmetry, the symmetry group of quartz is Trigonal 32. The nonzero matrix elements take different values within different standards, which can result in confusion when specifying the material properties for a simulation, especially for quartz, where two different standards are commonly employed.
Finally, there is another complication in the case of quartz. Quartz crystals do not have symmetry planes parallel to the vertical axis. Correspondingly, they occur in two types: left- or right-handed (this is known as enantiomorphism). Each one of these enantiomorphic forms results in different signs for particular elements in the material property matrices.
The material property matrices appropriate for quartz and other Trigonal 32 materials are shown below. Note that the symmetry relationships between elements in the matrix hold irrespective of the standard used or whether the material is right- or left-handed.
\left(
\begin{array}{cccccc}
c_{E11} & c_{E12} & c_{E13} & c_{E14} & 0 & 0\\
c_{E12} & c_{E11} & c_{E13} & -c_{E14} & 0 & 0\\
c_{E13} & c_{E13} & c_{E33} & 0 & 0 & 0\\
c_{E14} & -c_{E14} & 0 & c_{E44} & 0 & 0 \\
0 & 0 & 0 & 0 & c_{E44} & c_{E14}\\
0 & 0 & 0 & 0 & c_{E14} & \frac{1}{2}\left(c_{E11}-c_{E12}\right)\\
\end{array}
\right)
\qquad
\left(
\begin{array}{cccccc}
s_{E11} & s_{E12} & s_{E13} & s_{E14} & 0 & 0\\
s_{E12} & s_{E11} & s_{E13} & -s_{E14} & 0 & 0\\
s_{E13} & s_{E13} & s_{E33} & 0 & 0 & 0\\
s_{E14} & -s_{E14} & 0 & s_{E44} & 0 & 0 \\
0 & 0 & 0 & 0 & s_{E44} & 2 s_{E14}\\
0 & 0 & 0 & 0 & 2 s_{E14} & 2\left(s_{E11}-s_{E12}\right)\\
\end{array}
\right)

\left(
\begin{array}{cccccc}
e_{11} & -e_{11} & 0 & e_{14} & 0 & 0 \\
0 & 0 & 0 & 0 & -e_{14} & -e_{11}\\
0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}
\right)
\qquad
\left(
\begin{array}{cccccc}
d_{11} & -d_{11} & 0 & d_{14} & 0 & 0 \\
0 & 0 & 0 & 0 & -d_{14} & -2d_{11} \\
0 & 0 & 0 & 0 & 0 & 0\\
\end{array}
\right)

\left(
\begin{array}{ccc}
\epsilon_{rS11} & 0 & 0 \\
0 & \epsilon_{rS11} & 0 \\
0 & 0 & \epsilon_{rS33} \\
\end{array}
\right)
\qquad
\left(
\begin{array}{ccc}
\epsilon_{rT11} & 0 & 0 \\
0 & \epsilon_{rT11} & 0 \\
0 & 0 & \epsilon_{rT33} \\
\end{array}
\right)
Having defined a set of material properties in terms of matrices that operate on the different components of the stress or the strain in the x,y,z axes system, all that remains is to define a consistent set of axes to use when writing down the material properties.
Correspondingly, all of the standards define a consistent set of axes for each of the relevant crystal classes. Unfortunately, in the particular case of quartz, subsequent standards have not used the same sets of axes, and the adoption of the most recent standard has not been widespread. Therefore, it is important to understand exactly which standard a given set of material properties is defined in.
The two relevant standards are the IRE 1949 standard and the IEEE 1978 standard.
The orientation of the axes set with the crystal can be determined by specifying the orientation with respect to the atoms in the unit cell of the crystal (which is not that helpful in practice) or by specifying the orientation with respect to the crystal forms. A crystal form is a set of crystal faces or planes that are related by symmetry. Particular forms commonly appear in crystal specimens found in rocks and are used to identify different minerals.
The Quartz Page has a series of helpful figures for identifying the common crystal forms, termed m, r, s, x, and z, as well as a further page specifying the Miller indices of the corresponding planes. Since the standards typically use crystal forms to orientate the axes, this approach is adopted in the figure below, which shows the two axes sets that relate to the 1978 and 1949 standards. Note that both left- and right-handed quartz are shown in the figure.
Crystallographic axes defined for quartz within the 1978 IEEE standard (solid lines) and the 1949 standard (dashed lines). Click on the image to view a larger version.
As a result of the different crystal axes, the signs of the material properties for both right- and left-handed quartz can change depending on the particular standard employed. The table below summarizes the different signs that occur for the quartz material properties:
Material Property | IRE 1949, Right-Handed Quartz | IRE 1949, Left-Handed Quartz | IEEE 1978, Right-Handed Quartz | IEEE 1978, Left-Handed Quartz
s_{E14} | + | + | – | –
c_{E14} | – | – | + | +
d_{11} | – | + | + | –
d_{14} | – | + | – | +
e_{11} | – | + | + | –
e_{14} | + | – | + | –
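Because these sign conventions are easy to get wrong, it can help to encode the table above directly in a lookup. The following sketch (property names as plain strings, purely illustrative) attaches the sign appropriate to a given standard and handedness to a tabulated magnitude:

```python
# Signs of the quartz material constants, keyed by (standard, handedness),
# transcribed from the table above. The magnitudes are identical in all
# four cases; only the signs differ.
SIGNS = {
    ("IRE 1949", "right"):  {"sE14": +1, "cE14": -1, "d11": -1, "d14": -1, "e11": -1, "e14": +1},
    ("IRE 1949", "left"):   {"sE14": +1, "cE14": -1, "d11": +1, "d14": +1, "e11": +1, "e14": -1},
    ("IEEE 1978", "right"): {"sE14": -1, "cE14": +1, "d11": +1, "d14": -1, "e11": +1, "e14": +1},
    ("IEEE 1978", "left"):  {"sE14": -1, "cE14": +1, "d11": -1, "d14": +1, "e11": -1, "e14": -1},
}

def signed(prop, magnitude, standard, hand):
    """Attach the convention-dependent sign to a tabulated magnitude."""
    return SIGNS[(standard, hand)][prop] * abs(magnitude)

# Example: d11 of right-handed quartz flips sign between the two standards.
d11_ieee = signed("d11", 2.3e-12, "IEEE 1978", "right")  # positive
d11_ire = signed("d11", 2.3e-12, "IRE 1949", "right")    # negative
```

A helper like this makes it obvious when two data sources quote the "same" constant with opposite signs because they follow different standards.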
Usually, piezoelectrics such as quartz are supplied in thin wafers that have been cut at a particular angle with respect to the crystallographic axes. The orientation of a piezoelectric crystal cut is frequently defined using the system common to both the 1949 and 1978 standards. The orientation of the cut with respect to the crystal axes is specified by a series of rotations, using notation of the form illustrated below:
Diagram showing how a GT-cut plate of quartz is defined in the IEEE 1978 standard. The crystal shown is right-handed quartz.
The first two letters of the notation given in the brackets describe the orientation of the thickness and length of the plate being cut from the crystal. From the figure on the left, it is clear that the thickness direction (t) is aligned with the Y-axis and the length direction (l) is aligned with the X-axis. The plate also has a third dimension, its width (w). After the first two letters, a series of rotations is defined about the edges of the plate.
In the example above, the first rotation is about the l-axis, with an angle of −51°. The negative angle means that the rotation takes place in the opposite direction to a right-handed rotation about the axis. Finally, an additional rotation about the resulting t-axis is defined, with an angle (in a right-handed sense) of 45°.
Most practical cuts use one or two rotations, but it is possible to have up to three rotations within the standard, allowing for completely arbitrary plate orientations.
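The rotation sequence can be expressed as a composition of rotation matrices about the current plate edges. Below is a geometric sketch of the two rotations described above (−51° about the l edge, then 45° about the resulting t edge), using Rodrigues' rotation formula; it only illustrates the axis bookkeeping and is not how any particular software implements crystal cuts:

```python
import numpy as np

def axis_rotation(axis, theta_deg):
    """Right-handed rotation matrix about a unit vector (Rodrigues' formula)."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    t = np.radians(theta_deg)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

# (YX...): thickness t starts along Y, length l starts along X.
t_dir = np.array([0.0, 1.0, 0.0])
l_dir = np.array([1.0, 0.0, 0.0])

# First rotation: -51 degrees about the l edge.
t_dir = axis_rotation(l_dir, -51.0) @ t_dir

# Second rotation: +45 degrees about the *resulting* t edge.
l_dir = axis_rotation(t_dir, 45.0) @ l_dir
```

The key point the code makes explicit is that each rotation is taken about the plate edge in its *current* orientation, not about the fixed crystal axes.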
Note that since the crystallographic axes are defined differently in the 1949 and the 1978 standards, the crystal cut definitions differ between the two. A common cut for quartz plates is the AT cut, which is defined in the two standards in the following manner:
Standard | AT Cut Definition
1949 IRE | (YXl) +35.25°
1978 IEEE | (YXl) −35.25°
The figure below shows how the two alternative definitions of the AT cut correspond to the two alternative definitions of the axes employed in the standards.
The AT cut of quartz is defined as (YXl) 35.25° in the IRE 1949 standard and (YXl) −35.25° in the IEEE 1978 standard. The figure shows the cut defined in a right-handed crystal of quartz. The difference between the standards stems from the different conventions for the orientation of the crystallographic axes. In the IRE 1949 standard, the rotation occurs in a positive, or right-handed, sense about the l-axis (which in this case is aligned with the X-axis). As a result of the different axes set employed in the IEEE 1978 standard, the same cut corresponds to a negative angle in that standard.
We have now seen how the two different standards result in different definitions of the material properties and different definitions of the crystal cuts.
In a follow-up blog post, we will explore how to set up a COMSOL Multiphysics model using the two standards. COMSOL Multiphysics provides material properties for quartz using both of the available standards, so it is possible to set up a model using whichever standard you are most familiar with. Stay tuned for that.
Although it is possible to set up and solve a 3D model of a conical horn antenna, such a model would require a relatively large amount of computational resources to solve. We can solve for the electromagnetic fields much more quickly by exploiting the symmetry of the structure. Because we are dealing with a cone, our model is structurally symmetric around its axis, i.e., it’s axisymmetric.
Now, although the structure is axisymmetric, the electromagnetic fields will have some variation around the azimuth of the axis, that is, the fields have an azimuthal variation. The RF Module and the Wave Optics Module allow you to model axisymmetric structures with different azimuthal mode numbers.
We can exploit this feature: by setting up a 2D axisymmetric model and solving it for several different azimuthal mode numbers, we obtain a model that solves much more quickly and uses less memory than a full 3D model. I like the sound of that. But first, a quick note on horn antennas.
There are various types of horn antennas in terms of both overall shape and the pattern of the inside. These attributes determine the antenna's beam profile, bandwidth, and cross-polarization.
Cross-polarization means that the electromagnetic fields are polarized opposite to what is intended. For example, we want the fields to be polarized vertically, but they are instead polarized horizontally.
The funnel part of the antenna is connected to a waveguide, which feeds electromagnetic waves into the antenna. The shape of the horn will dictate what application it’s suited for. For example, sectoral horns (labeled b and c in the image below) are typically used for wide search radar antennas.
Various horn antenna shapes: a) pyramidal; b) sectoral, E-plane; c) sectoral, H-plane; d) conical; e) exponential. “Horn antenna types” by Chetvorno — Own work. Licensed under Creative Commons Zero, Public Domain Dedication, via Wikimedia Commons.
The antenna in our case is both shaped like a cone (labeled d in the image above) and has a corrugated surface inside; it's a corrugated conical horn antenna fed by a circular waveguide. The waveguide passes the excited TE mode through the corrugated funnel, which, in turn, generates a TM mode. Due to the corrugated surface throughout the cone, the modes are mixed, leading to a lower cross-polarization at the aperture than for the original excited TE mode.
Conical horn antenna: A visualization in 3D based on a 2D axisymmetric model. The waveguide feeds the antenna with the TE_{1m} mode (m = ±1), which mixes with the TM_{1m} mode as it propagates through the antenna.
Above, I mentioned what cross-polarization is, but why would we want to reduce it? Well, if we have a lot of cross-polarization, our signal may interfere with other channels nearby if they have alternating vertical and horizontal polarization. We wouldn't want that.
To investigate the cross-polarization, we can use COMSOL Multiphysics and the RF Module to set up a model. As we learned earlier, we can save time by solving this as a 2D axisymmetric model instead of in 3D. We can do that by using the Electromagnetic Waves, Frequency Domain interface.
I will skip over the step-by-step model setup and head straight to the fun stuff — the results. If you want to reproduce the plots shown here, feel free to download the model documentation and MPH-file from the Model Gallery.
First, we can see what the directive beam pattern of the antenna is:
Far-field plot: The directive beam pattern of the antenna.
Next, we can analyze the electric field at the antenna's entrance and exit. By solving the model for both m = +1 and m = −1, we can compare the linear polarization in the x- and y-directions at the exit.
Electric field at the entrance and exit of the antenna for the linear superposition of m = +1 and m = −1.
At the waveguide feed, the field is mostly in the x-direction, but not linearly polarized. At the aperture, the field is very nearly linearly polarized. To quantify the polarization in both directions, we can evaluate the integral of the absolute value of each field component over the conical horn antenna's entrance and exit. Doing so, we'll find that the ratio is roughly 5:1 at the entrance and about 40:1 at the exit. In other words, we have reduced the cross-polarization by approximately a factor of 8.
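The role of the two azimuthal mode numbers can be illustrated in a few lines of numpy: a field with azimuthal dependence exp(±imφ) rotates around the axis, but the superposition of the m = +1 and m = −1 solutions is purely real and varies as cos φ, which is the azimuthal signature of a linearly polarized field. This is a toy illustration of the superposition, not the COMSOL field solution itself:

```python
import numpy as np

phi = np.linspace(0.0, 2 * np.pi, 361)  # azimuthal angle around the antenna axis

m_plus = np.exp(+1j * phi)    # azimuthal dependence of the m = +1 mode
m_minus = np.exp(-1j * phi)   # azimuthal dependence of the m = -1 mode

# Individually the modes rotate around the axis; their superposition is
# purely real and varies as 2*cos(phi) -- a fixed, linearly polarized pattern.
total = m_plus + m_minus
```

This is exactly why the 2D axisymmetric model must be solved for both azimuthal mode numbers before the 3D field picture can be reconstructed.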
The Electrical showcase, called Designing and Modeling Electrical Systems and Devices, is a resource created by COMSOL applications engineers and developers alike to demonstrate the modeling capabilities of COMSOL Multiphysics in a comprehensive, resource-oriented guide.
In the showcase, you’ll find information about the six modules offered by COMSOL specifically designed for simulating such diverse applications as transformers, electronic packaging, the propagation of waves in and around structures, analyzing microwave devices and antennas, and much more. The showcase introduces the philosophy of the COMSOL software by demonstrating the various ways in which you can use it to perform detailed simulations with realworld accuracy. The showcase is divided into sections to display this functionality most effectively; you’ll find sections on Joule and Induction Heating, Optics and Photonics, and Plasma physics, just to name a few. Explore these and other areas by selecting from the available categories.
When exploring the showcase, and the COMSOL Product Suite in general, the philosophy of the multiphysics approach to modeling will become apparent. We developed the Product Suite to allow you to conduct fully coupled analyses of applications involving multiple physics in one and the same simulation environment. For electrical engineers, an example of this modeling approach is the design of a power transmission line, where heat transfer, structural mechanics, and electromagnetics all come into play.
In a power transmission line, operating temperatures can affect load-carrying capabilities and even the protective coating that surrounds electrical cables. As a result of the heat produced as current flows through the conductor, the system temperature rises and the electrical and thermal material properties of the conductor change. The interaction between these physics can change their expected behavior, altering, for example, the current-carrying capabilities of the conductor as well as the durability of the cable's protective coating.
In this application, the electrical and thermal effects are interdependent and strongly coupled. Therefore, the simulation must couple the physics the same way they are in the real world. The objective is to find a selfconsistent solution that satisfies all physics. This is the key strength of COMSOL Multiphysics, where accurate analyses are accomplished through the use of a truly multiphysics code that allows engineers to apply an unlimited number of physics analyses in a single simulation.
The Designing and Modeling Electrical Systems and Devices resource shows you more about this approach to multiphysics modeling and how you can employ it in your own simulations.
Once you have finished exploring the different application areas, you can contact COMSOL experts to ask any additional questions you might have. A short form is provided at the end of the showcase that will put you directly in contact with our application engineers.
Explore all the free resources mentioned above and learn how multiphysics modeling can help improve your R&D process at www.comsol.com/showcase/electrical.
Frequency selective surfaces (FSS) are periodic structures that function as filters for plane waves, such as microwave frequency waves. These structures can transmit, absorb, or reflect different amounts of radiation at varied frequencies. Typically, they have a bandstop or bandpass frequency response.
Frequency selective surfaces are used in a variety of applications. For example, the article “Picking the Pattern for a Stealth Antenna” from COMSOL News 2013 describes how engineers at Altran used FSS as RF filters to reduce the radar cross sections (RCS) of stealth antennas. In the article, designers employed FSS to reduce antenna gain in order to lower the RCS; there, the FSS were designed to absorb incident radiation rather than scatter it. FSS are typically constructed from metallic patterns arranged periodically, and complementary split ring resonators can be used to build such structures.
As a type of planar resonator, complementary split ring resonators are primarily used to simulate metamaterial elements. When designing a bandpass structure, for example, they can be arranged periodically. Modeling these resonators in a periodic configuration can become quite complex and timeconsuming. However, you can overcome these design challenges by implementing periodic boundary conditions into your model.
COMSOL Multiphysics, together with the RF Module, enables you to model a periodic complementary split ring resonator with ease by utilizing perfectly matched layers and periodic boundary conditions. As an example, we can refer to the Frequency Selective Surface, Periodic Complementary Split Ring Resonator model, which is available in our Model Gallery.
In this model, a split ring slot is patterned on a thin copper layer (which is modeled as a perfect electric conductor) that rests on a PTFE substrate that is 2 mm in thickness.
A single unit cell of the complementary split ring resonator. The model is created with periodic boundary conditions.
To simulate a 2D infinite array, as shown above, you can model just one unit cell of the complementary split ring resonator. This is done using Floquet-periodic boundary conditions on each of the four sides of the unit cell.
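The Floquet-periodic condition simply states that, for plane-wave excitation, the fields on opposite faces of the unit cell are identical up to a phase factor set by the tangential wavevector. A minimal numerical illustration (with a hypothetical cell pitch and incidence angle, not the model's actual dimensions):

```python
import numpy as np

# Hypothetical values for illustration -- not the model's actual dimensions.
f0 = 4.6e9                    # operating frequency (Hz)
c0 = 299792458.0              # speed of light in vacuum (m/s)
k0 = 2 * np.pi * f0 / c0      # free-space wavenumber (rad/m)
a = 0.012                     # unit-cell pitch (m)
theta = np.radians(30.0)      # angle of incidence
kx = k0 * np.sin(theta)       # tangential wavevector component

x = np.linspace(0.0, a, 101)
E = np.exp(-1j * kx * x)      # tangential variation of the incident plane wave

# Floquet periodicity: the field one cell over equals the field here
# multiplied by the fixed phase factor exp(-1j * kx * a).
E_shifted = np.exp(-1j * kx * (x + a))
```

Because the phase factor is the same for every cell, solving one unit cell with this boundary condition reproduces the response of the whole infinite array.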
To learn how to do this, check out the model documentation, where stepbystep instructions are provided.
While this post mostly focuses on how you can save time modeling using periodic boundary conditions, this particular model’s documentation goes into further detail regarding the periodic structure’s bandpass frequency response in terms of Sparameters, as shown below.
Sparameter plot showing the periodic structure functions as a bandpass filter near 4.6 GHz.
The electron energy distribution function (EEDF) is essential in plasma modeling because it is needed to compute reaction rates for electron collision reactions. Because electron transport properties can also be derived from the EEDF, the choice of the EEDF you use influences the results of the plasma model. If the plasma is in thermodynamic equilibrium, the EEDF has a Maxwellian shape. In most plasmas, for technical purposes, deviations from the Maxwellian form occur.
To describe the EEDF, several possibilities are available, such as a Maxwell or Druyvesteyn function. In addition, a generalized form is available, which is an intermediate between the Maxwell and the Druyvesteyn function.
Maxwell:

f(\epsilon)=\varphi^{-3/2}\beta_1\exp\left(-\frac{\epsilon\beta_2}{\varphi}\right),\qquad
\beta_1=\Gamma(5/2)^{3/2}\Gamma(3/2)^{-5/2},\quad \beta_2=\Gamma(5/2)\Gamma(3/2)^{-1}

Druyvesteyn:

f(\epsilon)=\varphi^{-3/2}\beta_1\exp\left(-\left(\frac{\epsilon\beta_2}{\varphi}\right)^2\right),\qquad
\beta_1=\Gamma(5/4)^{3/2}\Gamma(3/4)^{-5/2},\quad \beta_2=\Gamma(5/4)\Gamma(3/4)^{-1}

Generalized:

f(\epsilon)=\varphi^{-3/2}\beta_1\exp\left(-\left(\frac{\epsilon\beta_2}{\varphi}\right)^g\right),\qquad
\beta_1=\Gamma(5/(2g))^{3/2}\Gamma(3/(2g))^{-5/2},\quad \beta_2=\Gamma(5/(2g))\Gamma(3/(2g))^{-1}
Here, ϵ is the electron energy (eV), \varphi is the mean electron energy (eV), and g is a factor between 1 and 2. For a Maxwell distribution function, g is equal to 1, while g equals 2 for a Druyvesteyn distribution. Lastly, \Gamma is the Gamma function.
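The distribution functions above are straightforward to implement and check numerically. One detail worth flagging in this sketch: for the normalization ∫√ε f dε = 1 with mean energy φ to hold for all g, a factor of g is included in β1 here (for g = 1 it coincides with the Maxwellian constant given above):

```python
import numpy as np
from math import gamma

def eedf(eps, phi, g=1.0):
    """Generalized EEDF: g = 1 gives Maxwell, g = 2 gives Druyvesteyn.
    eps and phi (mean electron energy) are in eV. A factor of g is included
    in beta1 so that the normalization below holds for all g."""
    b1 = g * gamma(5 / (2 * g)) ** 1.5 * gamma(3 / (2 * g)) ** -2.5
    b2 = gamma(5 / (2 * g)) / gamma(3 / (2 * g))
    return phi ** -1.5 * b1 * np.exp(-(eps * b2 / phi) ** g)

def trapezoid(y, x):
    # plain trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Sanity check: int sqrt(eps)*f deps = 1 and int eps^(3/2)*f deps = phi.
eps = np.linspace(0.0, 100.0, 100001)
phi = 5.0
f = eedf(eps, phi, g=2.0)                 # Druyvesteyn
norm = trapezoid(np.sqrt(eps) * f, eps)
mean = trapezoid(eps ** 1.5 * f, eps)
```

The same check with g = 1 recovers the Maxwellian case, so the one function covers all three forms listed above.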
The Druyvesteyn EEDF is based on a constant (electron-energy-independent) cross section, whereas the Maxwellian EEDF is based on a constant collision frequency. Both distribution functions assume that elastic collisions dominate, so the effect of inelastic collisions (e.g., excitation or ionization) on the distribution function is insignificant. In such a case, the distribution function becomes spherically symmetric. In elastic collisions with neutral atoms, the electrons' direction of motion is changed, but not their energy (due to the large mass difference).
If the electrons are in thermodynamic equilibrium with each other, the distribution function is Maxwellian. However, this is only true if the degree of ionization is high, so that electron-electron collisions can drive the distribution toward a Maxwellian shape. Inelastic collisions of electrons with heavy particles lead to a drop in the EEDF at higher electron energies. Therefore, a Druyvesteyn distribution function often gives more accurate results at lower degrees of ionization.
Furthermore, the EEDF can be computed by solving the Boltzmann equation. The Boltzmann equation describes the evolution of the distribution function, f, in a six-dimensional phase space:

\frac{\partial f}{\partial t}+\mathbf{v}\cdot\nabla f-\frac{e}{m_e}\left(\mathbf{E}\cdot\nabla_{\mathbf{v}} f\right)=C[f]

Here, \mathbf{v} is the velocity, \mathbf{E} is the electric field, and C[f] represents the rate of change of f due to collisions.
To solve the Boltzmann equation and thereby compute the EEDF, drastic simplifications are necessary. A common approach is to expand the distribution function in spherical harmonics. The EEDF is assumed to be almost spherically symmetric, so the series can be truncated after the second term (the so-called two-term approximation). This approach is the most accurate way to compute the EEDF because the anisotropic perturbation due to inelastic collisions is taken into account. However, it is also the most computationally expensive approach.
Maxwellian EEDF in eV^{–1} for mean electron energies from 2–10 eV.
Normally, the distribution function is divided by \sqrt{\epsilon} for illustration purposes. This kind of distribution function is also known as an electron energy probability function (EEPF). For a Maxwellian function, plotted on a semilogarithmic scale, this results in a straight line with a slope of -1/(k_B T_e), as shown below.
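This slope can be checked directly against the Maxwell formula above: with k_B T_e = 2\varphi/3, the exponent -3\epsilon/(2\varphi) equals -\epsilon/(k_B T_e). A small sketch (the function name is illustrative):

```python
import math

phi = 5.0             # mean electron energy (eV)
kTe = 2.0 * phi / 3.0 # electron temperature in eV for a Maxwellian

def maxwell_eepf(eps):
    # Maxwellian EEPF in eV^-3/2: exponential in energy, so it is a
    # straight line on a semilogarithmic plot.
    b1 = math.gamma(2.5) ** 1.5 * math.gamma(1.5) ** -2.5
    return phi ** -1.5 * b1 * math.exp(-1.5 * eps / phi)

# Slope of ln(EEPF) between any two energies equals -1/(kB*Te).
slope = (math.log(maxwell_eepf(8.0)) - math.log(maxwell_eepf(2.0))) / (8.0 - 2.0)
print(slope, -1.0 / kTe)  # both come out to about -0.3 for phi = 5 eV
```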
Maxwellian EEPF in eV^{–3/2} for mean electron energies from 2–10 eV.
A Druyvesteyn distribution has its maximum and mean energy shifted to higher values, and its high-energy tail drops much faster than that of a Maxwellian distribution. As soon as the electrons reach energies high enough for excitation or ionization, inelastic collisions occur. This leads to the fall-off of the Boltzmann distribution function, which is observed below.
Comparison of a Maxwell, Druyvesteyn, and Boltzmann distribution function.
Mean electron energy 5 eV, electron density 1\cdot10^{16}\ \mathrm{m}^{-3}, ionization degree 1\cdot10^{-9}.
In the plasma model, the EEDF is needed to compute the rate coefficients, k_k, for the electron collision reactions, according to the equation:

k_k=\gamma\int_0^\infty \epsilon\,\sigma_k(\epsilon)\,f(\epsilon)\,d\epsilon
In the above equation, \gamma = \sqrt{2q/m_e}, \epsilon is the electron energy, and \sigma_k is the cross section for reaction k.
The rate coefficients for excitation and ionization depend strongly on the shape of the EEDF. This is due to the exponential drop-off in the population of electrons at energies exceeding the activation threshold. Using a Maxwellian EEDF can lead to an overestimation of the ionization rate, which is shown below.
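The tail effect can be illustrated with a toy evaluation of the rate-coefficient integral. The step cross section below is purely hypothetical (a constant 10^{-20} m^2 above a 15.8 eV threshold, roughly where argon ionization begins); it is not real argon data, but it reproduces the qualitative trend: at a mean energy of 5 eV, the Maxwellian EEDF yields a much larger rate coefficient than the Druyvesteyn EEDF.

```python
import math

Q = 1.602176634e-19    # elementary charge (C)
ME = 9.1093837015e-31  # electron mass (kg)
GAMMA = math.sqrt(2 * Q / ME)  # gamma = sqrt(2q/me), converts the eV-based
                               # integral to a rate coefficient in m^3/s

def eedf(eps, phi, g):
    # Generalized EEDF; g=1 -> Maxwell, g=2 -> Druyvesteyn.
    b1 = g * math.gamma(5 / (2 * g)) ** 1.5 * math.gamma(3 / (2 * g)) ** -2.5
    b2 = math.gamma(5 / (2 * g)) / math.gamma(3 / (2 * g))
    return phi ** -1.5 * b1 * math.exp(-((eps * b2 / phi) ** g))

def sigma(eps):
    # Hypothetical step cross section: 1e-20 m^2 above a 15.8 eV threshold.
    return 1e-20 if eps >= 15.8 else 0.0

def rate(phi, g, de=0.01, emax=200.0):
    # k = gamma * integral of eps * sigma(eps) * f(eps) d(eps), midpoint rule.
    pts = (de * (i + 0.5) for i in range(int(emax / de)))
    return GAMMA * sum(e * sigma(e) * eedf(e, phi, g) for e in pts) * de

k_max = rate(5.0, 1.0)  # Maxwellian
k_dru = rate(5.0, 2.0)  # Druyvesteyn
print(f"k_Maxwell = {k_max:.2e} m^3/s, k_Druyvesteyn = {k_dru:.2e} m^3/s")
```

Because the Druyvesteyn tail decays as exp(-(\epsilon\beta_2/\varphi)^2), far fewer electrons exceed the threshold, and the computed rate coefficient drops by roughly an order of magnitude in this toy case.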
Rate coefficients for argon ionization computed with different kinds of EEDFs.
Furthermore, the electron transport properties can be computed by means of the EEDF, using the Boltzmann Equation, Two-Term Approximation interface. The computed transport coefficients depend less on the type of EEDF.
Reduced electron mobility computed with different kinds of EEDFs.
Because the rate coefficients can differ by orders of magnitude, we must be aware that the discharge characteristics may also change drastically when the EEDF is changed. The Plasma Module Model Library includes a model of a dielectric barrier discharge (DBD), which simulates electrical breakdown in atmospheric-pressure argon. The model was recomputed with the three different kinds of EEDFs, and the results were compared. The next two figures show the total current at the grounded electrode and the instantaneously absorbed power in the plasma. The plasma is driven with a sinusoidal voltage at a frequency of 50 kHz, and the figures show the behavior over two periods.
Total discharge current in the DBD vs. time.
Absorbed power of the DBD vs. time.
The results look quite similar. The choice of the EEDF does influence the modeling results, though not by orders of magnitude; in this case, the differences are well below a factor of two. This, of course, depends on the model and the specific results you wish to extract.