Every year, the International Microwave Symposium (IMS) brings together researchers and engineers from around the world, giving them a preview of what’s up and coming in the RF and microwave industries. While attending last year’s event, my colleague Jiyoun Munn noted that there was a lot of buzz surrounding 5G and the Internet of Things (IoT). Throughout the science and engineering community, this sentiment has rung true, with a variety of research underway to advance the potential of these innovative concepts.

I recently had a chance to speak with Jiyoun about his experience attending IMS2016. During our discussion, he mentioned a technology that was referenced as a point of focus in the RF and microwave industries in the year ahead, one that could have large implications: *autonomous vehicles*.

*A self-driving car with U.S. Secretary of State John Kerry inside. Image in the public domain, via Wikimedia Commons.*

Fully autonomous cars would essentially change all aspects of driving. People with disabilities that prevent them from driving could now travel on their own. Those who already drive, meanwhile, would have more time to dedicate to other tasks, while using less gas and staying safer.

Self-driving cars would also have a significant impact on our roads and highways. Faster reaction times, for instance, would reduce the required following distance between vehicles as well as overall traffic congestion. This, in turn, could enable more cars to be on the road and allow for higher speed limits. Further, since these cars would automatically stick to the speed limit, police officers would no longer have to patrol highways, giving them more time to focus on other types of crime.

But what does it mean for a car to be fully autonomous? The National Highway Traffic Safety Administration has set out guidelines for classifying automation levels in vehicles. You can find a summary of these different levels highlighted below.

The Society of Automotive Engineers (SAE) has put together a similar list, which you can view here. SAE’s standards include an additional level for cars that can drive only in specific situations, but are able to maintain control if the human driver doesn’t respond appropriately to a request to intervene.

While some semiautonomous cars are already on the road, there are still challenges that must be overcome before fully autonomous vehicles can become a reality. Let’s have a look…

Bringing fully autonomous vehicles to the road presents challenges beyond the technology itself. While laws don’t necessarily exclude these cars, they don’t directly address them either. So far, only 16 states have introduced legislation regarding autonomous vehicles. For autonomous cars to move from the testing phase to widespread use, more states have to pass laws that not only permit them on the road, but also account for who is liable in the case of an accident.

There’s also the question of privacy as the software needed to automate cars would have access to a lot of information about their drivers. Anything from their preferred coffee shops to their location at any given time could be stored. While this is convenient for the driver, the car companies would have access to such data as well. How can this information be used? It’s a question that the companies and lawmakers must decide on.

There are several challenges to consider as well when it comes to the reliability and security of the car’s software. Autonomous vehicles run on complicated software, which, like any other electronic device, can crash. Unlike other devices, this would put the driver in a dangerous and potentially life-threatening situation. The software itself can also be hacked, which could lead to anything from cars being held for ransom to the hackers remotely driving vehicles.

For the actual design of autonomous cars, companies are using some combination of LIDAR, radar, and stereo cameras to “see”. LIDAR provides a 360° view using lasers and is the most precise of the three technologies, but the sensor can be easily tricked and its accuracy is compromised in weather such as heavy rain, snow, and fog. Radar and cameras are more limited in scope, but the first measures relative speed and range, while the second recognizes when objects are moving laterally in front of a vehicle.

*An image taken by LIDAR, showing the road contour, elevation, and vegetation. Image by the Oregon Department of Transportation. Licensed under CC BY 2.0, via Flickr Creative Commons.*

Along with the sensors, autonomous vehicles also need a GPS to navigate to the driver’s intended location. While we use this technology on a regular basis, a GPS depends on its map’s accuracy. If you’ve ever had one tell you to turn the wrong way on a one-way street or to use a park’s walkway as a road, you can imagine what might happen if cars relied solely on this system. Additionally, we need a GPS that provides real-time updates to account for closed roads and construction.

Most important, though, is that these cars actually reach Level 4 automation. A car this advanced could distinguish between potholes and plastic bags, knowing to avoid one and ignore the other. If a pedestrian suddenly moves into the street, the car must decide whether to brake or swerve out of the way. Making the jump to Level 4 means developing software that encompasses all possible driving situations and then, once this software is in place, considering how to reduce the expenses of such vehicles.

While a built-in chauffeur is still a long way off, there have been many significant advancements in autonomous technology over the years. Cars are now equipped with the ability to automatically brake, maintain speeds, stay centered in a lane, and park themselves. With so many companies in the midst of designing autonomous cars, there is great promise that this technology will continue to advance until full autonomy is achieved.

As our understanding of how the technology works improves, multiphysics simulation can serve as a powerful tool for advancing the design of autonomous vehicles. Sensors, for instance, are an important piece of the puzzle. Through modeling, we could test changes to a sensor’s design and placement in the vehicle, determining the configuration and location that delivers optimal performance. We look forward to continued improvements in autonomous vehicle design and to simulation’s potential to help further its performance and capabilities.


The two simulation methods that we will discuss in today’s blog post are the asymptotic waveform evaluation (AWE) and frequency-domain modal methods. Both are designed to help you overcome the conventional problem of long simulation times when using a very fine frequency resolution or running a very wideband simulation. AWE is quite efficient at describing smooth frequency responses with a single resonance or no resonance at all. The frequency-domain modal method, meanwhile, is useful for quickly analyzing multistage filters or filters with a large number of elements that have multiple resonances in a target passband.

For our purposes, it would be too technical to discuss the numerical characteristics of the asymptotic waveform evaluation (AWE), a reduced-order modeling technique. Instead, we will go over how to use this method in the RF Module. The solver performs fast adaptive frequency sweeping using AWE, which is configured in the *Frequency Domain* study settings, shown below.

Study Extensions *section of* Frequency Domain *study settings.*

When the *Use asymptotic waveform evaluation* check box is selected, the AWE solver is triggered. By default, the solver uses Padé approximations.
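The internals of the AWE solver aren’t exposed in the UI, but the core idea behind a Padé approximation is easy to illustrate: a rational function matched to the Taylor coefficients of a response can reproduce pole-like (resonant) behavior that a polynomial of the same order misses entirely. Here is a minimal, hypothetical sketch in plain Python (not COMSOL’s actual algorithm):

```python
# Sketch: why Pade approximants suit resonant responses.
# Given Taylor coefficients c0, c1, c2 of f about x = 0, the [1/1]
# Pade approximant is (a0 + a1*x) / (1 + b1*x), with coefficients
# matched term by term to the series.

def pade_1_1(c0, c1, c2):
    """Return (a0, a1, b1) of the [1/1] Pade approximant."""
    b1 = -c2 / c1          # from matching the x^2 term
    a0 = c0                # from matching the constant term
    a1 = c1 + b1 * c0      # from matching the x^1 term
    return a0, a1, b1

# f(x) = 1/(1 - x) has a pole at x = 1; its Taylor series is 1 + x + x^2 + ...
a0, a1, b1 = pade_1_1(1.0, 1.0, 1.0)

def pade(x):
    return (a0 + a1 * x) / (1.0 + b1 * x)

def taylor2(x):
    return 1.0 + x + x**2

x = 0.9                   # close to the "resonance" (pole)
exact = 1.0 / (1.0 - x)   # = 10.0
# The rational form recovers the pole exactly from the same three
# coefficients, while the degree-2 polynomial falls far short.
print(pade(x), taylor2(x), exact)
```

From three series coefficients, the rational form reconstructs the pole exactly, which is why a handful of solver iterations can describe a sharply resonant S-parameter curve over a whole band.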

The AWE method is very useful when simulating resonant circuits, especially bandpass-filter type devices with many frequency points. For instance, the Evanescent Mode Cylindrical Cavity Filter tutorial model, available in the Application Library, sweeps the simulation frequency between 3.45 GHz and 3.61 GHz with a 5 MHz frequency step.

*The Evanescent Mode Cylindrical Cavity Filter tutorial model (left) and its discrete frequency sweep results (right). The S-parameter plot does not look smooth around the resonant frequency.*

Say you run the simulation again with a much finer frequency resolution, such as a 100 kHz frequency step that is 50 times finer. You can expect that the simulation will take 50 times longer to finish. When using the AWE option in this particular example model, the simulation time is almost the same as the regular frequency sweep case, but we can obtain all of the computed solutions on the dependent variable with the 100 kHz frequency step.

The simulation time also varies with regard to the user input for the AWE expression. Any model variable works as an AWE expression, so long as it produces a smooth curve as a function of frequency, such as a Gaussian pulse. The absolute value of S21 (`abs(emw.S21)`), for example, works as the input for the AWE expression in the case of a two-port bandpass filter. For one-port devices like antennas, S11 works as well. If the frequency response of the AWE expression contains an infinite gradient, as is the case for the S11 value of an antenna with excellent impedance matching at a single frequency point, the simulation will take longer to complete.

*AWE expression using S-parameters (S21) for a two-port filter simulation.*

Following the 100 kHz frequency step simulation, the solutions contain a ton of data, so the model file size will increase tremendously when it is saved. When only S-parameters are of interest, a common theme in most passive RF and microwave device designs, it is not necessary to store all of the field solutions. By selecting the *Store fields in output* check box in the *Values of Dependent Variables* section, we can control the part of the model for which the computed solution is saved. We add only the selections containing the boundaries where the S-parameters are calculated. The lumped port size is typically very small compared to the entire modeling domain, so when only the solutions on the port boundaries are stored, the saved file size with AWE is more or less that of the regular discrete frequency sweep model.

*Settings window for the lumped port boundary (left) and the explicit selection generated by the lumped port (right).*

It is easy to add an explicit selection when setting up the lumped port. When you specify the lumped port boundary selection, click the *Create Selection* button. This will add an explicit selection with the boundary you just added for the lumped port. By repeating the same step for the other port, you will obtain all of the selections to use for storing only those results that you need to plot the S-parameters.

Values of Dependent Variables *section of the* Frequency Domain *study settings.*

In the *Values of Dependent Variables* section, change the selection in the *Store fields in output* combo box from *All* to *For selections*. You can then add the explicit selections created from the lumped ports.

Now you are ready to run the AWE frequency sweep. Don’t forget to use the finer frequency step in the study settings. You can do so in one of two ways: Directly type in the step you want, or click the *Range* button next to the input field to use the *Range* dialog box.

*Updating the simulation frequency step via the* Range *dialog box.*

Once the simulation is complete, you will notice that the simulation time for the AWE frequency sweep with a much finer step is almost the same as the discrete sweep. Let’s compare the computed S-parameters. Since the AWE performed a frequency sweep that was 50 times finer, its frequency response (S-parameters) plot consequently looks much nicer. Not only do you save precious time with this approach, but as the plot below illustrates, you also still obtain accurate and good-looking results.

*S-parameter plot of the AWE and discrete frequency sweep simulations.*

Bandpass frequency responses of a passive circuit result from a combination of multiple resonances. Eigenfrequency analysis is key to capturing the resonance frequencies of a device of arbitrary shape. Once we obtain all of the necessary information from the eigenfrequency analysis, we can reuse it in the frequency-domain modal study. Doing so improves the efficiency of the simulation when a finer frequency resolution is required to describe the frequency response more accurately, as with the AWE method.

To perform a frequency-domain modal analysis seamlessly, there are two important simulation steps to keep in mind. The first involves refining the eigenfrequency study results. The output of the eigenfrequency study is purely numerical and can even include nonphysical results. By using the manual eigenfrequency search method, those unwanted low-frequency residues can be filtered out. The manual search is controlled by a few settings: *Eigenfrequency search method around shift*, *Desired number of eigenfrequencies*, and *Search for eigenfrequencies around*. For the last setting, the lowest passband frequency works as a ballpark value.

*Manual search method in the* Eigenfrequency *settings.*

The second step involves using the **linper** operator on the excitation voltage. The frequency-domain modal method requires using the **linper** operator as a load on the dependent variable of a model. This operator can be applied to the excitation voltage input of a lumped port in the *Electromagnetic Waves, Frequency Domain* physics interface.

*The linper operator is used in the voltage input of the lumped port settings.*

To try this out, let’s take a look at the Coupled Line Filter tutorial model, available in our Application Library. You’ll want to add a new study with the eigenfrequency and frequency-domain modal study steps, configuring the settings for each study step as described above. Repeat this again with a frequency step that is 50 times finer. The *Store fields in output* check box in the *Values of Dependent Variables* section can also be applied to the frequency-domain modal study — if you are interested only in S-parameters, this is the way to go. By storing solutions only on the lumped port boundary, it is possible to further reduce simulation time.

*S-parameter plot of the regular frequency sweep and frequency-domain modal simulations.*

*S-parameter plot of the regular frequency sweep and frequency-domain modal simulations with a frequency step that is 50 times finer.*

Note that the eigenfrequency analysis contains a lumped port that impacts the simulation as an extra loading factor, so the phase of the computed S-parameters differs from that of the regular frequency sweep model. The results are comparable only for phase-independent S-parameter quantities such as dB-scaled values, absolute values, reflectivity, or transmissivity.

The simulation methods presented in today’s blog post are powerful tools for enabling faster, more efficient modeling of passive RF and microwave devices. While these methods have been available since version 5.2, our latest release — COMSOL Multiphysics® version 5.2a — features polished Application Library examples that help further guide you in how to utilize these techniques. You can find such examples highlighted below:

- Asymptotic Waveform Evaluation method
- RF Module > Passive Devices > cylindrical cavity filter evanescent

- Frequency-Domain Modal method
- RF Module > Passive Devices > cascaded cavity filter
- RF Module > Passive Devices > coupled line filter
- RF Module > Passive Devices > cpw bandpass filter

Interested in learning about other improvements and upgrades in version 5.2a? Head over to our Release Highlights page for additional details.


As we discussed in a previous blog post, there are many required advancements and design considerations for the 5G mobile network. One of the improvements that RF engineers need to work toward is increasing antenna gain to serve the much higher frequencies on which 5G will operate.

*An isotropic low-gain antenna used in older networks versus a directive high-gain antenna used for 5G.*

Another requirement of the 5G mobile network is improved phase progression technology, which shapes the radiation pattern and steers the beam of an antenna array by controlling the input signal, addressing angular coverage issues.

*A monopole phased antenna array can steer a beam toward the desired direction.*

One device, a slot-coupled microstrip patch antenna array, can be incorporated into designs to address these coverage issues. However, there are many complex design parameters that must be considered in order to build a device that is optimized for 5G wireless communications.

Simulation can help by offering the capability to evaluate and implement physical effects that cannot be easily tested in a design lab or through prototyping, such as extreme temperature variation, structural deformation, and chemical reactions. Unfortunately, not every engineer working on a design is an expert in simulation, so the simulation expert on the team must be involved in every step of the design process whenever there are changes to an antenna array design or simulation environment.

The Application Builder addresses these difficulties by further enhancing the capabilities of simulation. Now, an otherwise complex and tedious numerical model of an RF design can be turned into an interactive and user-friendly tool for experts and end users alike. Today, let’s explore the Slot-Coupled Microstrip Patch Antenna Array Synthesizer simulation app and how it can help us optimize phased array antenna designs for 5G and IoT.

Active electronically scanned arrays (AESAs), also known as phased antenna arrays, are conventionally used in the military for radar and satellite applications. Due to the growing need for higher data rates in communication devices, these arrays are now finding commercial applications as well. The size of this seemingly simple component can easily exceed tens of wavelengths, making simulations very memory intensive. As a result, computation takes a very long time, even when only approximate values are needed to evaluate a proof-of-concept model. Faster prototyping would help to analyze performance and determine design parameters quickly.

The Slot-Coupled Microstrip Patch Antenna Array Synthesizer is based on a full finite element method (FEM) model of a single microstrip patch antenna built on low-temperature cofired ceramic (LTCC) layers. The device initially operates at 30 GHz, and the radiation pattern and directivity analyses of the entire array structure are integrated using the powerful postprocessing functionality of COMSOL Multiphysics. The Application Builder works as a shortcut, providing a variety of ways to design and build a user-friendly graphical user interface (GUI) and transforming an ordinary mathematical model into an intuitive simulation tool.

*The top view of a slot-coupled microstrip patch antenna.*

The Application Builder offers two essential tools for creating our app: the Form Editor and Method Editor. The Form Editor enables us to design the GUI with simple functionality by adding form objects to a custom interface. The Method Editor helps to implement more advanced and customized functions over the form objects. After an accurate simulation of a single microstrip patch antenna, we find the two-dimensional antenna array factor

\frac{\sin\left(\frac{N_x k_0 S_x \sin\theta\cos\phi + \alpha_x}{2}\right)}{\sin\left(\frac{k_0 S_x \sin\theta\cos\phi + \alpha_x}{2}\right)}\frac{\sin\left(\frac{N_y k_0 S_y \sin\theta\sin\phi + \alpha_y}{2}\right)}{\sin\left(\frac{k_0 S_y \sin\theta\sin\phi + \alpha_y}{2}\right)}

This factor corresponds to user input, such as the array size, arithmetic phase progression, and angular resolution, and is imposed on the single-antenna radiation pattern data (`emw.normEfar`).
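The array factor above is straightforward to evaluate outside the app as well. Below is an illustrative Python sketch of the same expression (the app itself computes this via COMSOL postprocessing, not this script); the `np.sinc` form avoids the 0/0 limit at broadside:

```python
import numpy as np

# 2D array factor of an Nx-by-Ny rectangular array.
# Sx, Sy: element spacing [m]; ax, ay: phase progression [rad];
# k0: free-space wave number [rad/m].
def array_factor(theta, phi, Nx, Ny, Sx, Sy, ax, ay, k0):
    psi_x = k0 * Sx * np.sin(theta) * np.cos(phi) + ax
    psi_y = k0 * Sy * np.sin(theta) * np.sin(phi) + ay
    # sin(N*psi/2)/sin(psi/2), written with np.sinc so psi = 0 is well defined
    fx = Nx * np.sinc(Nx * psi_x / (2 * np.pi)) / np.sinc(psi_x / (2 * np.pi))
    fy = Ny * np.sinc(Ny * psi_y / (2 * np.pi)) / np.sinc(psi_y / (2 * np.pi))
    return fx * fy

f = 30e9                          # 30 GHz, as in the app
k0 = 2 * np.pi * f / 3e8
# Hypothetical 8x8 array, half-wavelength (5 mm) spacing, no phase progression:
af = array_factor(0.0, 0.0, 8, 8, 0.005, 0.005, 0.0, 0.0, k0)
print(af)                         # broadside peak = Nx * Ny = 64
```

Sweeping `ax` and `ay` steers the main beam, which is exactly the phase progression input exposed in the app’s GUI.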

The Method Editor moves beyond a simple simulation, where visualization is limited to the predefined postprocessing variables, and allows for further customization.

*A preview of the main form to show form objects.*

*Using the Method Editor to create custom actions for form objects.*

In this app, there are many design parameters that we can test for our microstrip patch antenna array design, including:

- Antenna properties
- Patch size
- Substrate size
- Slot size
- Feed line width
- Extended feed line length
- Patch substrate thickness and relative permittivity
- Feed substrate thickness and relative permittivity

- Array properties
- Array size
- Phase progression
- Spacing

- Simulation properties
- Frequency
- Wavelength
- 3D plot resolution
- Polar plot resolution

The array dimensions, phase progression, and element spacing primarily characterize the shape and direction of the array’s radiation pattern. The angular resolution adds a finishing touch to the 3D and 2D radiation pattern visualizations. Note that when the antenna directivity is higher, a finer resolution will describe the sidelobes more accurately.

*The GUI of the Slot-Coupled Microstrip Patch Antenna Array Synthesizer app.*

After the analysis, the app reports whether the single antenna design parameters are optimal by using the computed S-parameter (S11) value compared to the pass/fail target criterion that the app user specifies before running the simulation. The app depicts the electric field distribution on each dielectric and metallic layer and also visualizes the entire view of the array to give app users a better feel for the performance of the design. You can also choose to include a complete simulation result report and documentation that concisely explains how the app works.

There are infinite ways to convert your model into a customized tool using the Application Builder, but what’s next? You can launch and use your simulation app with the core COMSOL Multiphysics® software. As long as you have an internet connection, you can run your app in a common web browser and even deploy it to colleagues and customers via the COMSOL Server™ product.

In the Application Gallery, even more apps are available for you to download and explore, covering physics areas such as electrical, mechanical, fluid, chemical, and more. These demo apps can serve as a guide for you to build useful apps of your own.

*The Frequency Selective Surface Simulator demo app (left) and the Plasmonic Wire Grating Simulator demo app (right).*

Whether you are building an app to enhance RF designs for the 5G network, or working in another application area, get started building simulation apps and optimizing your design workflow and product performance today.

- Try it yourself: Download the Slot-Coupled Microstrip Patch Antenna Array Synthesizer
- Read more about RF design applications for 5G and the Internet of Things on the COMSOL Blog:
- Watch a video to learn how to start building simulation apps from your own models

Both of these interfaces solve the frequency-domain form of Maxwell’s equations, but they do it in slightly different ways. The *Electromagnetic Waves, Frequency Domain* interface, which is available in both the RF and Wave Optics modules, solves directly for the complex electric field everywhere in the simulation. The *Electromagnetic Waves, Beam Envelopes* interface, which is available solely in the Wave Optics Module, will solve for the complex envelope of the electric field for a given wave vector. For the remainder of this post, we will refer to the *Electromagnetic Waves, Frequency Domain* interface as a *Full-Wave* simulation and the *Electromagnetic Waves, Beam Envelopes* interface as a *Beam-Envelope* simulation.

To see why the distinction between *Full-Wave* and *Beam-Envelope* is important, we will begin by discussing the trivial example of a plane wave propagating in free space, as shown in the image below. We will then apply the lessons learned to the dielectric slab.

*A graphical representation of a plane wave propagating in free space, where the red, green, and blue lines represent the electric field, magnetic field, and Poynting vector, respectively.*

To properly resolve the harmonic nature of the solution in a *Full-Wave* simulation, we need to mesh finer than the oscillations in the field. This is discussed further in these previous blog posts on tools for solving wave electromagnetics problems and modeling their materials. To simulate a plane wave propagating in free space, the number of mesh elements will then scale with the size of the free space domain in which we are interested. But what about the *Beam-Envelopes* simulation?

The *Beam-Envelopes* method is particularly well suited for models where we have good prior knowledge of the wave vector, \mathbf{k}. Practically speaking, this means that we are solving for the fields using the *ansatz* \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\mathbf{k}\cdot\mathbf{r}}. Notice that the only unknown in the ansatz is the envelope function \mathbf{E_1}\left(\mathbf{r}\right). This is the quantity that needs to be meshed to obtain a full solution, hence the mention of *beam envelopes* in the name of the interface. In the case of a plane wave in free space, the form of the ansatz matches exactly with the analytical solution. We know that the envelope function will be a constant, as shown by the green line in the figure below, so how many mesh elements do we need to resolve the solution? Just one.

*The electric field and phase of a plane wave propagating in free space. In the field plot (left), the blue and green lines show the real part and absolute value of E(r), respectively. The phase plot (right) shows the argument of E(r). In both plots, the *x*-axis is normalized to a wavelength, so this represents one full oscillation of the wave.*

In practice, *Beam-Envelopes* simulations are more flexible than the ansatz we just used. This is for two reasons. First, instead of specifying a wave vector, we can specify a user-defined phase function, \phi\left(\mathbf{r}\right). Second, there is also a bidirectional option that allows for a second propagating wave and a full ansatz of \mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}\left(\mathbf{r}\right)e^{-j\phi_1\left(\mathbf{r}\right)} + \mathbf{E_2}\left(\mathbf{r}\right)e^{-j\phi_2\left(\mathbf{r}\right)}. This is the functionality that we will take advantage of in modeling the dielectric slab (also called a Fabry-Pérot etalon).

The points discussed here will come up again in the dielectric slab example, and so we highlight them again for clarity. The size of mesh elements in a *Full-Wave* simulation is proportional to the wavelength because we are solving directly for the full field, while the mesh element size in a *Beam-Envelopes* simulation can be independent of the wavelength because we are solving for the envelope function of a given phase/wave vector. You can greatly reduce the number of mesh elements for large structures if a *Beam-Envelopes* simulation can be performed instead of a *Full-Wave* simulation, but this is only possible if you have prior knowledge of the wave vector (or phase function) everywhere in the simulation. Since the degrees of freedom, memory used, and simulation time all depend on the number of mesh elements, this can have a large influence on the computational requirements of your simulation.

Using the 2D geometry shown below, we can clearly see the different waves that need to be accounted for in a simulation of a dielectric slab illuminated by a plane wave. On the left of the slab, we have to account for the incoming wave traveling to the right, as well as the reflected wave traveling to the left. Because of internal reflections inside the slab itself, we have to account for both left- and right-traveling waves in the slab, and finally, the transmitted waves on the right. We also choose a specific example so that we can use concrete numbers.

Let’s make the dielectric slab an undoped silicon (Si) wafer that is 525 µm thick. We will simulate the response to terahertz (THz) radiation (i.e., submillimeter waves), which encompasses wavelengths of approximately 1 mm to 100 µm and is increasingly used for classifying semiconductor properties. The refractive index of undoped Si in this range is a constant n = 3.42. We choose the domain length to be 15 mm in the direction of propagation.

*The simulation geometry. Red arrows indicate incident and reflected waves. The left and right regions are air with n = 1 and the Si slab in the center has a refractive index n = 3.42. The x_{i} values along the bottom denote the spatial locations of the interface planes. The slab is centered in the simulation domain, such that x_{1} = (15 mm – 525 µm)/2. Note that this image is not to scale.*
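Before meshing anything, it is worth noting that this air/Si/air slab has a closed-form response. The standard Fabry-Pérot (Airy) formulas give a quick hand check for the simulated S-parameters; the sketch below is an independent analytic calculation, not part of the COMSOL setup:

```python
import numpy as np

# Analytic Fabry-Perot response of the air/Si/air slab
# (standard Airy summation of internal reflections).
n = 3.42          # undoped Si in the THz range
d = 525e-6        # wafer thickness [m]

def slab_RT(lam):
    k0 = 2 * np.pi / lam
    delta = n * k0 * d                     # one-way phase through the slab
    r12 = (1 - n) / (1 + n)                # air -> Si Fresnel coefficients
    t12 = 2 / (1 + n)
    r23 = (n - 1) / (n + 1)                # Si -> air
    t23 = 2 * n / (n + 1)
    denom = 1 + r12 * r23 * np.exp(-2j * delta)
    r = (r12 + r23 * np.exp(-2j * delta)) / denom
    t = t12 * t23 * np.exp(-1j * delta) / denom
    return abs(r) ** 2, abs(t) ** 2

R, T = slab_RT(1e-3)   # 1 mm wavelength (0.3 THz)
print(R, T, R + T)     # lossless slab, so R + T = 1
```

Sweeping `lam` reproduces the etalon fringes that both the *Full-Wave* and *Beam-Envelopes* simulations should recover.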

For a 2D *Full-Wave* simulation, we set a maximum element size that is a small fraction of the material wavelength to ensure the solution is well resolved. The simulation is invariant in the *y* direction, so we choose a simulation height much smaller than the domain length. Because we have constrained the wave to travel along the *x*-axis, we choose a mapped mesh to generate rectangular elements. The mesh will then be one mesh element thick in the *y* direction, with a mesh element size in the *x* direction proportional to \lambda_0/n, where n depends on whether the material is air or Si. Again, note that this is a wavelength-dependent mesh.
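A back-of-envelope count shows how this wavelength-dependent meshing rule scales. The sketch below assumes a hypothetical resolution of `res` elements per material wavelength (an illustrative value, not the exact setting used in the model):

```python
# Element count estimate for the Full-Wave mapped mesh along x.
res = 5                    # assumed elements per material wavelength
L_air = 15e-3 - 525e-6     # total air path [m]
L_si = 525e-6              # Si slab thickness [m]
n_si = 3.42

def n_elements(lam):
    # element size is lam/(res) in air and lam/(res*n_si) in Si
    return L_air / (lam / res) + L_si / (lam / (res * n_si))

ratio = n_elements(250e-6) / n_elements(1e-3)
print(ratio)               # mesh density scales as 1/wavelength -> 4.0
```

Whatever resolution is chosen, the element count scales as 1/λ, so quartering the wavelength quadruples the mesh, consistent with the roughly fourfold DOF increase reported for the *Full-Wave* simulation below. A *Beam-Envelopes* mesh, by contrast, can stay fixed.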

Before setting up the mesh for a *Beam-Envelopes* simulation, we first need to specify our user-defined phase function. The Gaussian Beam Incident at the Brewster Angle example in the Application Gallery demonstrates how to define a user-defined phase function for each domain through the use of variables, and we will use the same technique here. Referring to x_{0}, x_{1}, and x_{2} in the geometry figure above, we define the phase function for a plane wave traveling left to right in the three domains as

\phi\left(\mathbf{r}\right) = k_0\cdot\left(x-x_0\right)

\phi\left(\mathbf{r}\right) = k_0\cdot\left(\left(x_1-x_0\right) + n\cdot\left(x-x_1\right)\right)

\phi\left(\mathbf{r}\right) = k_0\cdot\left(\left(x_1-x_0\right) + n\cdot\left(x_2-x_1\right) + \left(x-x_2\right)\right)

where n = 3.42, the first line applies in the leftmost domain, the second line in the Si slab, and the third line in the rightmost domain. We then use this variable for the phase of the first wave and its negative for the phase of the second wave. Because we have completely captured the full phase variation of the solution in the ansatz, this allows a mapped mesh of only *three* elements for the entire model: one for each domain. Let’s examine what the mesh looks like in the Si slab for these two interfaces at two different wavelengths, corresponding to 1 mm and 250 µm.
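The piecewise phase definition above can be sketched in plain Python to check the one property it must have: continuity across both interfaces (in COMSOL this is set up with per-domain variables, not a script):

```python
import numpy as np

# Piecewise phase function for the left-to-right wave in the
# air / Si / air geometry (illustrative stand-in for the COMSOL variables).
lam = 1e-3                       # 1 mm free-space wavelength
k0 = 2 * np.pi / lam
n = 3.42
x0 = 0.0
x1 = (15e-3 - 525e-6) / 2        # left face of the slab
x2 = x1 + 525e-6                 # right face of the slab

def phase(x):
    if x <= x1:                                  # left air domain
        return k0 * (x - x0)
    elif x <= x2:                                # Si slab
        return k0 * ((x1 - x0) + n * (x - x1))
    else:                                        # right air domain
        return k0 * ((x1 - x0) + n * (x2 - x1) + (x - x2))

# The ansatz only works if the phase is continuous at both interfaces:
eps = 1e-9
jump1 = phase(x1 + eps) - phase(x1 - eps)
jump2 = phase(x2 + eps) - phase(x2 - eps)
print(jump1, jump2)        # both ~ 0 (up to the k0*(n+1)*eps sampling step)
```

The slope is k0 in air and n·k0 in the slab, matching the local wavelength in each material, which is what lets the envelope stay constant.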

*The mesh in the Si (dielectric) slab. From left to right, we have the *Full-Wave* mesh at 1 mm, the *Full-Wave* mesh at 250 µm, and the *Beam-Envelopes* mesh at any wavelength. Note that the *Full-Wave* mesh density clearly increases with decreasing wavelength, while the *Beam-Envelopes* mesh is a single rectangular element at any wavelength.*

Yes, that is the correct mesh for the Si slab in the *Beam-Envelopes* simulation. Because the ansatz matches the solution exactly, we only need three total elements for the entire simulation: one for the Si slab and one each for the two air domains on either side of it. This is independent of wavelength. On the other hand, the mesh for the *Full-Wave* simulation is approximately four times more dense at λ = 250 µm than at λ = 1 mm. Let’s look at this in concrete numbers for the degrees of freedom (DOF) solved for in these simulations.

| Wavelength Simulated | Full-Wave Simulation DOF Used | Beam-Envelopes Simulation DOF Used |
|---|---|---|
| 1 mm | 4,134 | 74 |
| 250 µm | 16,444 | 74 |

*The number of degrees of freedom (DOF) used at two different wavelengths for the Full-Wave and Beam-Envelopes simulations.*

Again, it is important to point out that this does not mean that one interface is better or worse than another. They are different techniques and choosing the appropriate option is an important simulation decision. However, it is fair to say that a *Full-Wave* simulation is more general, since we did not need to supply it with a wave vector or phase function. It can solve a wider class of problems than *Beam-Envelopes* simulations, but *Beam-Envelopes* simulations can greatly reduce the DOF when the wave vector is known. As we have seen in a previous blog post, memory usage in a simulation strongly depends on the number of DOF. Do not blindly use a *Beam-Envelopes* simulation everywhere though! Let’s take a look at another example where we intentionally make a bad choice for the wave vector and see what happens.

In the hypothetical free space example above, we chose a unidirectional wave vector. Here, we will do the same for the Si slab. It is important to emphasize that choosing a single wave vector where we know that the solution will be a superposition of left- and right-traveling waves is an exceptionally bad choice, and we do this here solely for demonstration purposes. Instead of using the bidirectional formulation with a user-defined phase function, let’s naively choose a single “guess” wave vector k_G and see what the damage is. Using our ansatz, inside of the dielectric slab we have

\mathbf{E}\left(\mathbf{r}\right)e^{-j\mathbf{k_G}\cdot\mathbf{r}} = \mathbf{E_1}e^{-j\mathbf{k}\cdot\mathbf{r}} + \mathbf{E_2}e^{j\mathbf{k}\cdot\mathbf{r}}

where the left-hand side is the solution we are computing and the right-hand side is exact. Now, we manipulate the equation slightly to examine the spatial variation in the solution.

\mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1}e^{-j\left(\mathbf{k-k_G}\right)\cdot\mathbf{r}} + \mathbf{E_2}e^{j\left(\mathbf{k+k_G}\right) \cdot\mathbf{r}}

We intentionally chose the case where k_G = k, which means we can simplify to

\mathbf{E}\left(\mathbf{r}\right) = \mathbf{E_1} + \mathbf{E_2}e^{j\left(\mathbf{k+k_G}\right)\cdot\mathbf{r}}.

Since E_1 and E_2 are constants determined by the Fresnel relations at the boundaries of the dielectric slab, the only spatial variation in the computed solution comes from the remaining exponential term. The minimum mesh requirement in the slab is then determined by the “effective” wavelength of this oscillating term

\lambda_{eff} = \frac{2\pi}{\left|\mathbf{k+k_G}\right|} = \frac{2\pi}{2\left|\mathbf{k}\right|} = \frac{\lambda}{2}

which is half of the original wavelength. Not only have we made the *Beam-Envelopes* mesh wavelength dependent, but the required mesh in the dielectric slab for this choice of wave vector needs to be twice as dense as the mesh for a *Full-Wave* simulation. We have actually made the situation worse with the poor choice of a single wave vector for a simulation with multiple reflections. We could, of course, simply double the mesh density and obtain the correct solution, but that would defeat the purpose of choosing the *Beam-Envelopes* simulation in the first place. Make smart choices!
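The arithmetic behind this penalty is easy to check. The short sketch below (our own illustration; the variable names are ours, not part of the original model) computes the effective wavelength of the residual envelope oscillation in the slab at 250 µm:

```python
import math

def effective_wavelength(k, k_guess):
    """Effective wavelength 2*pi/|k - k_guess| of the envelope that remains
    after factoring a guessed wavenumber k_guess out of a wave with true
    wavenumber k (1D scalars; the sign encodes the travel direction)."""
    residual = abs(k - k_guess)
    return math.inf if residual == 0 else 2 * math.pi / residual

n_si = 3.42             # refractive index of the Si slab
wavelength0 = 250e-6    # free-space wavelength, 250 um
k = 2 * math.pi * n_si / wavelength0   # wavenumber inside the slab

# Right-traveling wave with the correct guess: the envelope is constant
print(effective_wavelength(k, k))      # inf

# Left-traveling wave (-k) with the same guess +k: the envelope
# oscillates at 2k, i.e., half the wavelength inside the slab
lam_eff = effective_wavelength(-k, k)
print(lam_eff, wavelength0 / n_si / 2)  # both ~3.65e-5 m
```

The right-traveling wave needs no mesh resolution at all, but the left-traveling wave now forces a mesh twice as fine as the physical wavelength, exactly as derived above.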

Another practical question is how the results of a *Full-Wave* and a *Beam-Envelopes* simulation compare. They are both solving Maxwell’s equations on the same geometry with the same material properties, so the various results (transmission, reflection, field values) agree as you would expect. There are slight differences, though.

If you want to evaluate the electric field of the right-propagating wave in the dielectric slab, you can do that in the *Beam-Envelopes* simulation. This is, of course, because we solved for both right- and left-propagating waves and obtained the total field by summing these two contributions. This could be extracted from the *Full-Wave* simulation in this case as well, but it would require additional user-defined postprocessing and may not be possible in all cases. It may seem counterintuitive that we actually have *more* information readily available from a *Beam-Envelopes* simulation, even though it is computationally less expensive. We must remember, however, that this is simply the result of solving the model using the ansatz we specified initially.

We have examined the simple case of a dielectric slab in free space using both the *Electromagnetic Waves, Frequency Domain* and *Electromagnetic Waves, Beam Envelopes* interfaces. In comparing *Full-Wave* and *Beam-Envelopes* simulations, we showed that a *Beam-Envelopes* simulation can handle much larger models, but only in cases where we have good knowledge of the wave vector (or phase function) everywhere in the simulation. This knowledge is not required for a *Full-Wave* simulation, but the model must then be meshed on the order of a wavelength, as opposed to meshing the change in the envelope function in a *Beam-Envelopes* simulation. It is also worth mentioning that most *Beam-Envelopes* meshes will need more than the three elements shown here. This was only possible because we chose a textbook example with an analytical solution to use as a teaching model. For more realistic simulations, you can refer to the Mach-Zehnder Modulator or Self-Focusing Gaussian Beam examples in the Application Gallery.

Note that the *Electromagnetic Waves, Frequency Domain* interface is available in both the RF and Wave Optics modules, although with slightly different features. The *Full-Wave* simulation discussed in this post could be performed in either module, although the *Beam-Envelopes* simulation requires the Wave Optics Module. For a full list of differences between the RF and Wave Optics modules, you can refer to this specification chart for COMSOL Multiphysics products.

- Browse the COMSOL Blog for more discussions of electrical modeling
- Watch these videos:
- Take your electromagnetics modeling to the next level at a local training event
- Contact us with questions about your own model

5G is expected to be available for use by the year 2020, surpassing 4G LTE, 4G, 3G, and the networks that came before them. A new generation of wireless network arrives roughly every ten years, and with less than four years to go, we need to continue developing the technology that will make 5G a reality for consumers. The ideal wireless network is constantly improving, and the advancements expected of it define the 5G network, as well as the networks that will follow.

You probably own a smartphone that you use multiple times throughout the day. According to recent data from the Pew Research Center, 64% of Americans own a smartphone of some kind, and about two thirds of these smartphone owners access the internet using their phones. In fact, the numbers suggest that younger smart-device owners use their phones more for text messaging and internet access than for making traditional phone calls. In a survey published in April 2015, 97% of 18- to 29-year-olds used their cellphones to go online, while 93% used their devices to make a call. These survey results mark a shift in the primary use of cellphones, creating a need for more data to keep the increasingly crowded mobile communications highway up and running.

*Smartphone users browse the internet on their devices for everyday tasks like looking up bus routes and times. Image by Metropolitan Transportation Authority of the State of New York — Bus Time Manhattan Launch. Licensed under CC BY 2.0, via Wikimedia Commons.*

Industry leaders in mobile communications have agreed on a list of specific requirements for the 5G network that cater to a new age in mobile communications. As mentioned above, the high use of mobile data has prompted a need for ultrahigh download rates, with almost no latency. Although you could argue that your current download rate is fast, for 5G, it must be close to instantaneous. Because of this increase in data needs, 5G should also have minimized signal traffic to handle the heavy load of mobile users, which is always increasing. The ideal wireless network should additionally provide reliable service *everywhere*. This means that you will be able to have fast and clear service in rural and desolate areas, not just urban and suburban locations, as is the case with 4G and 4G LTE.

5G-compatible devices themselves should be relatively low cost and consume less energy so that they have a longer battery life than current devices. The way you charge your phone may also change. Some smartphones already have the ability to charge without a power cord — they are placed on a power base and charged through induction by transferring power wirelessly between the base and the phone. More advanced charging options are also being investigated, such as longer-distance wireless power transfer. In this case, a power base transmits small signals to charge a device from within a certain distance. Instead of plugging your phone into a wall, or even placing it on a base, you may soon be able to charge your phone while it sits in your pocket. Such advancements may be integrated into certain 5G technologies.

*A wireless charging station transfers power to mobile devices. Image by Veredai from Powermat Technologies — Own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.*

5G must be able to facilitate communication between connected *Internet of Things* (IoT) devices. Sources predict that by 2020, the year in which we should be able to support 5G, there will be 25 billion connected devices in our world. 5G needs to be able to handle the enormous data traffic of this web of connected smart devices, as well as that from the large number of cellphone users worldwide. Besides the high volume of data traffic, 5G also requires enough speed to quickly analyze the data collected from IoT devices for practical use by consumers.

As an engineer working toward the development of the 5G network, you understand that there are many design elements to consider. The RF capabilities of COMSOL Multiphysics can shed light on this burgeoning technology.

Currently, our wireless network cannot handle the amount of data we download — at least not at an optimal speed. To achieve the required data rate to handle the download speeds of 5G, our wireless network needs to operate in a wider frequency range. The frequency spectrum for today’s wireless network is around 1 GHz to 3 GHz. 5G not only has to operate at a frequency above 6 GHz, but it has to be able to handle a span of up to 100 GHz.

For 5G, a frequency of around 30 GHz may serve as the backbone for the mobile communication network, meaning it will mostly operate around this range. The need for frequencies of up to 100 GHz is more complementary, for reasons such as extra capacity for the system and wider bandwidths for denser areas. You will recognize the importance of this requirement if you have ever tried to access the internet on your device while at a music concert or sporting event, where thousands of other people are trying to do the same thing. A complementary frequency range will also help for surges of use in natural disasters, when many people are trying to contact loved ones, and provide the extra “push” of service to more remote areas that may have never had decent mobile service before.

A *diplexer*, one of the many components that will be used in 5G mobile systems, can help address this issue. A diplexer splits signals into two different frequency bands, supporting the wide range required for the mobile network. Within a diplexer, the lower-frequency band “listens”, or receives a signal, while the higher-frequency band “talks”, or transmits a signal. Simulation is a simple way to test different iterations of a diplexer design to determine the best settings. By studying the S-parameters and electric field of a waveguide diplexer, which we can compute through a simulation-based approach, we can see if the design will work well with the 5G mobile network.

*A simulated waveguide diplexer.*

In the WR-28 waveguide diplexer model shown above, which is for Ka-band applications, a lower and upper bandpass are set to 28 GHz (left) and 30.4 GHz (right), respectively. The simulation shows that the input power at each passband is separately distributed without being coupled to one another.

Another way to bring about the development of 5G technologies is to increase antenna gain in new mobile devices. This does not amplify the cell signal itself, but rather increases the distance the signal can travel toward cell towers. If you think back to older, primitive phones where you would have to pull up the antenna to make a call, the circuit of the device would act as a quarter-wavelength monopole antenna. These antennas have the same gain in all azimuthal directions: the electromagnetic waves propagate isotropically in the H-plane, so the signal radiates equally in all directions and can more reliably reach cell towers, no matter the user’s location.

*A 3D far-field radiation pattern of a planar inverted-F antenna (PIFA) in a mobile device.*

As cellphones continued to develop in both function and design, the antenna was miniaturized and embedded inside the phone’s body structure, thus distorting the ideal isotropic radiation pattern. Mobile communication progressed with the development of 3G, 4G, and 4G LTE, and cellphones started to include miniaturized multiband antennas instead of the quarter-wavelength monopole version.

One well-known problem with this era of communication was the spotty service and dropped calls that occurred when using your phone in certain areas. In some instances, it could even make a difference of standing in one area of a room versus another part of the same room. My colleague Jiyoun Munn, who develops the RF Module, explains why this would often happen: “When you are talking on a cellphone, you usually do not know exactly where the cell tower is located in relation to your phone. The occurrence of multipath fading from indoor propagation also contributes to this problem.”

The 5G network requires much higher frequencies. As Jiyoun mentions: “Since the attenuation in the air is more severe at higher frequencies while electromagnetic waves propagate, antennas will need to have an increased gain to reach a longer distance. Higher antenna gain means more directionality of its radiation pattern. As a consequence, the antenna’s visibility, or angular coverage, is very narrow.” Because of this, cellphones would see the base stations in a very limited range.
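The frequency dependence Jiyoun describes can be illustrated with the free-space path loss formula, a simplified model of propagation loss (it ignores atmospheric absorption, which makes real mmWave attenuation even more severe). The sketch below, with names and frequencies chosen by us for illustration, shows the extra loss a 30 GHz link must overcome compared with a sub-3 GHz link:

```python
import math

def fspl_db(freq_hz, distance_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299792458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

d = 100.0  # an illustrative link distance in meters
loss_low = fspl_db(2.0e9, d)   # a typical sub-3 GHz carrier
loss_mmw = fspl_db(30e9, d)    # a candidate 5G mmWave carrier

# Every 10x increase in frequency adds 20 dB of path loss; this gap
# is the extra gain the antenna system must recover.
print(round(loss_mmw - loss_low, 2))  # 23.52
```

That roughly 24 dB deficit is why higher-gain, and therefore more directional, antennas are unavoidable at mmWave frequencies.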

*A quarter-wavelength monopole antenna (above) radiates isotropically in the azimuthal plane and operates at a lower frequency, while a phased array antenna (below) scans for cell tower signals with higher gain at higher frequencies.*

In order to overcome the limited angular coverage of high-gain antennas, it is necessary to use an active electronically scanned array (AESA), also known as a *phased array*. This concept shapes the radiation pattern and steers the beam from an antenna array by controlling the relative phases and magnitudes of the input signal. “The arithmetic phase progression on each antenna element in the antenna array changes the maximum radiation direction,” notes Jiyoun. “The direction of maximum radiation is normal to the equiphase plane, so the radiation pattern is tilted to the direction of the faster antenna element in terms of phase.” This is the basic idea of a phased array, which can steer the beam toward a desired direction.

*The far-field radiation pattern of a monopole antenna array.*

The optimal antenna for 5G technologies is a phased array that can be built from microstrip patch antennas: a cluster of identical radiating elements. By utilizing phase progression and a weight factor on each array element, the angular coverage and gain for the 5G network can be optimized. Using simulation, we can evaluate the design of a phased antenna array for 5G performance. Computer simulation makes it simple to compute the far-field radiation pattern and perform a full-scale far-field analysis for a variety of input parameters.
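The phase-progression idea can be sketched with a standard array-factor calculation for a uniform linear array. This is a generic textbook model, not the COMSOL setup used in the post, and all names are ours:

```python
import cmath
import math

def array_factor_db(n, d_over_lambda, beta, theta):
    """Normalized array factor (dB) of an n-element uniform linear array
    with element spacing d (in wavelengths) and progressive phase beta,
    evaluated at angle theta from broadside (radians)."""
    psi = 2 * math.pi * d_over_lambda * math.sin(theta) + beta
    af = sum(cmath.exp(1j * i * psi) for i in range(n))
    # Small offset avoids log10(0) at perfect pattern nulls
    return 20 * math.log10(abs(af) / n + 1e-12)

n, d = 8, 0.5              # 8 elements at half-wavelength spacing
steer = math.radians(30)   # desired beam direction

# Arithmetic phase progression that tilts the beam to the steer angle
beta = -2 * math.pi * d * math.sin(steer)

print(round(array_factor_db(n, d, beta, steer), 3))  # 0.0 (peak at 30 degrees)
print(array_factor_db(n, d, beta, 0.0))              # deep null at broadside
```

Changing `beta` electronically moves the main lobe without any mechanical motion, which is exactly what allows a 5G handset to keep a narrow, high-gain beam pointed at the base station.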

*Simulation results for an 8×8 phased array antenna.*

The Slot-Coupled Microstrip Patch Antenna Array Synthesizer demo app has a simplified user interface that can be used to run quick design tests to develop prototypes for 5G antenna designs. The app can be launched and run using a web browser, even remotely. “With this app,” Jiyoun says, “design engineers can examine the asymptotic solution of the antenna, and they can also share the app with other colleagues on their team in order to work together to build the optimal device.” Because the app is intuitive and specialized for this specific use, your own antenna design can be tested and retested in just 90 seconds, rather than the two days it would take to run a full computer simulation. Building simulation apps is an effective and simple way to perform these analyses.

*The Slot-Coupled Microstrip Patch Antenna Array Synthesizer app.*

With the arrival of 5G, we will find ourselves making room for an even more technology-based society. The Internet of Things, which also goes by monikers such as the *Internet of Everything* and the *Industrial Internet*, is the term used to describe this new age of smart devices and information sharing. The popularity of the Internet of Things may stem from the fact that nearly every industry can utilize some form of IoT technology.

Home automation and wearables, such as fitness tracking devices and smart watches, are popular uses of IoT that are able to track a person’s activity and internal statistics, connect this information to accessible apps on their smartphones, and turn the data into insights that the individual consumer can apply to their home or lifestyle. As the 5G network continues to develop, more novel uses of IoT technology become available. In healthcare, IoT devices automatically dispense medicine and monitor patients based on their statistics and activity. In the media, smart devices track our entertainment and shopping preferences to automatically message us about related product information.

*A fitness tracking bracelet, one example of how consumers use the Internet of Things.*

The data collected by these smart devices — whether tracking temperature, footsteps, environmental conditions, or various other factors — needs to be analyzed for the entire system of devices to be useful. The IoT system must then take the analyzed information and “tell” the smart object or objects what to do with it (turn a device on or off, send a message, disperse a medication, etc.) to complete the cycle.

By studying the RF interference between these smart devices, we can create the best Internet of Things possible. There is almost no limit to the different applications of the Internet of Things, and by contributing to the development of the 5G network, we can optimize how the devices communicate with one another for efficient operation. With a streamlined 5G network, the Internet of Things will be able to work to its fullest potential and become a global reality in just a matter of years.

The arrival of the 5G network is only a few years away. By working with RF applications such as antenna gain, frequency range, and beam progression in more detail, we can ensure that by the year 2020, the technology of the future will be readily accessible to us all. Using the power of computer simulation and simulation apps, we can ensure that we have a hand in creating exceptionally fast and reliable technology for a new era in global communication. Let’s get started developing 5G technology — and a more connected world — today.

- Try it yourself: Check out these tutorial models to get started on 5G optimization:
- Read more about 5G, the Internet of Things, and wearable technology on the COMSOL Blog:

Electrical cables, also called transmission lines, are used everywhere in the modern world to transmit both power and data. If you are reading this on a cell phone or tablet computer that is “wireless”, there are still transmission lines within it connecting the various electrical components together. When you return home this evening, you will likely plug your device into a power cable to charge it.

Various transmission lines range from the small, such as coplanar waveguides on a printed circuit board (PCB), to the very large, like high voltage power lines. They also need to function in a variety of situations and conditions, from transatlantic telegraph cables to wiring in spacecraft, as shown in the image below. Transmission lines must be specially designed to ensure that they function appropriately in their environments, and may also be subject to further design goals, including required mechanical strength and weight minimization.

*Transmission wires in the payload bay of the OV-095 at the Shuttle Avionics Integration Laboratory (SAIL).*

When designing and using cables, engineers often refer to parameters per unit length for the series resistance (R), series inductance (L), shunt capacitance (C), and shunt conductance (G). These parameters can then be used to calculate cable performance, characteristic impedance, and propagation losses. It is important to keep in mind that these parameters come from the electromagnetic field solutions to Maxwell’s equations. We can use COMSOL Multiphysics to solve for the electromagnetic fields, as well as consider multiphysics effects to see how the cable parameters and performance change under different loads and environmental conditions. This could then be converted into an easy-to-use app, like this example that calculates the parameters for commonly used transmission lines.

Here, we examine a coaxial cable — a fundamental problem that is often covered in a standard curriculum for microwave engineering or transmission lines. The coaxial cable is so fundamental that Oliver Heaviside patented it in 1880, just a few years after Maxwell published his famous equations. For the students of scientific history, this is the same Oliver Heaviside who formulated Maxwell’s equations in the vector form that we are familiar with today; first used the term “impedance”; and helped develop transmission line theory.

Let us begin by considering a coaxial cable with dimensions as shown in the cross-sectional sketch below. The dielectric core between the inner and outer conductors has a relative permittivity (ε_r) of 2.25 − j0.01, a relative permeability (μ_r) of 1, and a conductivity of zero, while the inner and outer conductors have a conductivity (σ) of 5.98e7 S/m.

*The 2D cross section of the coaxial cable, where we have chosen a = 0.405 mm, b = 1.45 mm, and t = 0.1 mm.*

A standard method for solving transmission lines is to assume that the electric fields will oscillate and attenuate in the direction of propagation, while the cross-sectional profile of the fields will remain unchanged. If we then find a valid solution, uniqueness theorems ensure that the solution we have found is correct. Mathematically, this is equivalent to solving Maxwell’s equations using an *ansatz* of the form \tilde{\mathbf{E}}(x,y)e^{-\gamma z}, where γ = α + jβ is the complex propagation constant and α and β are the attenuation and propagation constants, respectively. In cylindrical coordinates for a coaxial cable, this results in the well-known field solution of

\begin{align}
\mathbf{E} &= \frac{V_0\hat{r}}{r\ln(b/a)}e^{-\gamma z}\\
\mathbf{H} &= \frac{I_0\hat{\phi}}{2\pi r}e^{-\gamma z}
\end{align}

which then yields the parameters per unit length of

\begin{align}
L &= \frac{\mu_0\mu_r}{2\pi}\ln\frac{b}{a} + \frac{\mu_0\mu_r\delta}{4\pi}\left(\frac{1}{a}+\frac{1}{b}\right)\\
C &= \frac{2\pi\epsilon_0\epsilon'}{\ln(b/a)}\\
R &= \frac{R_s}{2\pi}\left(\frac{1}{a}+\frac{1}{b}\right)\\
G &= \frac{2\pi\omega\epsilon_0\epsilon''}{\ln(b/a)}
\end{align}

where R_s is the sheet resistance and δ is the skin depth.

While the equations for capacitance and shunt conductance are valid at any frequency, it is extremely important to point out that the equations for the resistance and inductance depend on the skin depth and are therefore only valid at frequencies where the skin depth is much smaller than the physical thickness of the conductor. This is also why the second term in the inductance equation, called the *internal inductance*, may be unfamiliar to some readers, as it can be neglected when the metal is treated as a perfect conductor. The term represents inductance due to the penetration of the magnetic field into a metal of finite conductivity and is negligible at sufficiently high frequencies. (The term can also be expressed as R/ω.)

For further comparison, we can compute the DC resistance directly from the conductivity and cross-sectional area of the metal. The analytical equation for the DC inductance is a little more involved, and so we quote it here for reference.

L = \frac{\mu}{2\pi}\left\{ln\left(\frac{b+t}{a}\right) + \frac{2\left(\frac{b}{b+t}\right)^2}{1- \left(\frac{b}{b+t}\right)^2} ln\left(\frac{b+t}{b}\right) -\frac{3}{4} + \frac{\frac{\left(b+t\right)^4}{4} -\left(b+t\right)^2b^2+b^4\left(\frac{3}{4} + ln\frac{\left(b+t\right)}{b}\right) }{\left(\left(b+t\right)^2-b^2\right)^2}\right\}

Now that we have values for C and G at all frequencies, DC values for R and L, and asymptotic values for their high-frequency behavior, we have excellent benchmarks for our computational results.
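These analytical benchmarks are easy to evaluate directly. The sketch below plugs the stated dimensions and material properties into the high-frequency formulas; the variable names and the 1 GHz evaluation frequency are our own choices for illustration. Note that the capacitance reproduces the 98.142 pF/m value quoted later in the post.

```python
import math

# Cable parameters from the cross-sectional sketch
a, b = 0.405e-3, 1.45e-3                 # inner/outer conductor radii (m)
eps_p, eps_pp = 2.25, 0.01               # dielectric: eps' - j*eps''
sigma = 5.98e7                           # conductor conductivity (S/m)
mu0, eps0 = 4e-7 * math.pi, 8.854187817e-12

def skin_depth(f):
    return math.sqrt(2 / (mu0 * 2 * math.pi * f * sigma))

def rlcg(f):
    """High-frequency R and L, plus the frequency-independent C and the
    linearly frequency-dependent G, all per unit length."""
    delta = skin_depth(f)
    Rs = 1 / (sigma * delta)             # sheet resistance
    R = Rs / (2 * math.pi) * (1 / a + 1 / b)
    L = (mu0 / (2 * math.pi) * math.log(b / a)
         + mu0 * delta / (4 * math.pi) * (1 / a + 1 / b))  # incl. internal term
    C = 2 * math.pi * eps0 * eps_p / math.log(b / a)
    G = 2 * math.pi * (2 * math.pi * f) * eps0 * eps_pp / math.log(b / a)
    return R, L, C, G

R, L, C, G = rlcg(1e9)
print(C * 1e12)   # ~98.14 pF/m
```

At 1 GHz the skin depth is roughly 2 µm, far thinner than the 0.1 mm conductor, so the high-frequency R and L expressions are valid there.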

When setting up any numerical simulation, it is important to consider whether or not symmetry can be used to reduce the model size and increase the computational speed. As we saw earlier, the exact solution will be of the form \tilde{\mathbf{E}}(x,y)e^{-\gamma z}. Because the spatial variation of interest is primarily in the *xy*-plane, we just want to simulate a 2D cross section of the cable. One issue, however, is that the 2D governing equations used in the AC/DC Module assume that the fields are invariant in the out-of-plane direction. This means that we will not be able to capture the variation of the ansatz in a single 2D AC/DC simulation. We can find the variation with two simulations, though! This is because the series resistance and inductance depend on the current and the energy stored in the magnetic fields, while the shunt conductance and capacitance depend on the energy in the electric field. Let’s take a closer look.

Since the shunt conductance and capacitance can be calculated from the electric fields, we begin by using the *Electric Currents* interface.

*Boundary conditions and material properties for the *Electric Currents* interface simulation.*

Once the geometry and material properties are assigned, we assume that the conductors are equipotential (a safe assumption, since the conductivity difference between the conductor and the dielectric will generally be near 20 orders of magnitude) and set up the physics by applying V_{0} to the inner conductor and grounding the outer conductor to solve for the electric potential in the dielectric. The above analytical equation for capacitance comes from the following more general equations

\begin{align}
W_e &= \frac{1}{4}\int_{S}\mathbf{E}\cdot \mathbf{D}^\ast\,dS\\
W_e &= \frac{C|V_0|^2}{4}\\
C &= \frac{1}{|V_0|^2}\int_{S}\mathbf{E}\cdot \mathbf{D}^\ast\,dS
\end{align}

where the first equation is from electromagnetic theory and the second from circuit theory.

The first and second equations are combined to obtain the third equation. By inserting the known fields from above, we obtain the previous analytical result for C in a coaxial cable. More generally, these equations provide us with a method for obtaining the capacitance from the fields for any cable. From the simulation, we can compute the integral of the electric energy density, which gives us a capacitance of 98.142 pF/m and matches with theory. Since G and C are related by the equation

G=\frac{\omega\epsilon'' C}{\epsilon'}

we now have two of the four parameters.

At this point, it is also worth reiterating that we have assumed that the conductivity of the dielectric region is zero. This is typically done in the textbook derivation, and we have maintained that convention here because it does not significantly impact the physics — unlike our inclusion of the internal inductance term discussed earlier. Many dielectric core materials do have a nonzero conductivity and that can be accounted for in simulation by simply updating the material properties. To ensure that proper matching with theory is maintained, the appropriate derivations would need to be updated as well.

In a similar fashion, the series resistance and inductance can be calculated through simulation using the AC/DC Module’s *Magnetic Fields* interface. The simulation setup is straightforward, as demonstrated in the figure below.

*The conductor domains are added to a *Single-Turn Coil* node with the *Coil Group* feature, and the reversed current direction option ensures that the direction of current through the inner conductor is the opposite of the outer conductor, as indicated by the dots and crosses. The single-turn coil will account for the frequency dependence of the current distribution in the conductors, as opposed to the arbitrary distribution shown in the figure.*

We refer to the following equations, which are the magnetic analog of the previous equations, to calculate the inductance.

\begin{align}
W_m &= \frac{1}{4}\int_{S}\mathbf{B}\cdot \mathbf{H}^\ast\,dS\\
W_m &= \frac{L|I_0|^2}{4}\\
L &= \frac{1}{|I_0|^2}\int_{S}\mathbf{B}\cdot \mathbf{H}^\ast\,dS
\end{align}

To calculate the resistance, we use a slightly different technique. First, we integrate the resistive loss to determine the power dissipation per unit length. We can then use the familiar P = |I_0|²R/2 to calculate the resistance. Since R and L vary with frequency, let’s take a look at the calculated values and the analytical solutions in the DC and high-frequency (HF) limit.

*“Analytic (DC)” and “Analytic (HF)” refer to the analytical equations in the DC and high-frequency limits, respectively, which were discussed earlier. Note that these are both on log-log plots.*

We can clearly see that the computed values transition smoothly from the DC solution at low frequencies to the high-frequency solution, which is valid when the skin depth is much smaller than the thickness of the conductor. We anticipate that the transition region will be approximately located where the skin depth and conductor thickness are within one order of magnitude. This range is 4.2e3 Hz to 4.2e7 Hz, which is exactly what we see in the results.
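The location of this transition band follows directly from the skin depth formula. The sketch below (our own arithmetic, using the conductivity and shield thickness stated earlier) finds the frequency where the skin depth equals the conductor thickness and brackets it by two orders of magnitude on each side:

```python
import math

mu0 = 4e-7 * math.pi
sigma = 5.98e7       # conductor conductivity (S/m)
t = 0.1e-3           # conductor thickness (m)

def skin_depth(f):
    """Skin depth sqrt(2 / (mu0 * omega * sigma)) in meters."""
    return math.sqrt(2 / (mu0 * 2 * math.pi * f * sigma))

# Frequency at which the skin depth equals the conductor thickness
f0 = 1 / (math.pi * mu0 * sigma * t**2)
print(f0)            # ~4.2e5 Hz

# The skin depth staying within one order of magnitude of t (from 10t
# down to t/10) brackets the transition; since f scales as 1/delta^2,
# the band spans f0/100 to 100*f0, i.e., roughly 4.2e3 Hz to 4.2e7 Hz.
print(f0 / 100, f0 * 100)
```

This reproduces the 4.2e3 Hz to 4.2e7 Hz range seen in the plotted results.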

Now that we have completed the heavy lifting to calculate R, L, C, and G, there are two other significant parameters that can be determined. They are the characteristic impedance (Z_{c}) and the complex propagation constant (γ = α + jβ), where α is the attenuation constant and β is the propagation constant.

\begin{align}
Z_c &= \sqrt{\frac{R+j\omega L}{G+j\omega C}}\\
\gamma &= \sqrt{\left(R+j\omega L\right)\left(G+j\omega C\right)}
\end{align}

In the figure below, we see these values calculated using the analytical formulas for both the DC and high-frequency regime as well as the values determined from our simulation. We have also included a fourth line: the impedance calculated using COMSOL Multiphysics and the RF Module, which we will discuss shortly. As can be seen, our computations agree with the analytical solutions in their respective limits, as well as yielding the correct values through the transition region.

*A comparison of the characteristic impedance, determined using the analytical equations and COMSOL Multiphysics. The analytical equations plotted are from the DC and high-frequency (HF) equations discussed earlier, while the COMSOL Multiphysics results use the AC/DC and RF Modules. For clarity, the width of the “RF Module” line has been intentionally increased.*

Electromagnetic energy travels as waves, which means that the frequency of operation and the wavelength are inversely proportional. As we continue to solve at higher and higher frequencies, we need to be aware of the relative size of the wavelength and the electrical size of the cable. As discussed in a previous blog post, we should switch from the AC/DC Module to the RF Module at an electrical size of approximately λ/100. If we use the cable diameter as the electrical size and the speed of light inside the dielectric core of the cable, this yields a transition frequency of approximately 690 MHz.
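This transition frequency is straightforward to reproduce. The sketch below assumes the relevant diameter is that of the dielectric core, 2b, which reproduces the quoted value; using the full shield diameter 2(b + t) instead would give roughly 645 MHz:

```python
import math

c = 299792458.0     # speed of light in vacuum (m/s)
eps_r = 2.25        # real part of the dielectric permittivity
b = 1.45e-3         # outer radius of the dielectric core (m)

v = c / math.sqrt(eps_r)        # wave speed in the dielectric
diameter = 2 * b                # assumed electrical size of the cable

# Electrical size of lambda/100 means lambda = 100 * diameter
f_transition = v / (100 * diameter)
print(f_transition / 1e6)       # ~689 MHz
```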

At these higher frequencies, the cable is more appropriately treated as a waveguide and the cable excitation as a waveguide mode. Using waveguide terminology, the mode we have been examining is a special type of mode called *TEM* that can propagate at any frequency. When the cross section and wavelength are comparable, we also need to account for the possibility of higher-order modes. Unlike a TEM mode, most waveguide modes can only propagate above a characteristic cut-off frequency. Due to the cylindrical symmetry in our example model, there is an equation for the cut-off frequency of the first higher-order mode, which is a TE11 mode. This cut-off frequency is f_{c} = 35.3 GHz, but even with the relatively simple geometry, the cut-off frequency comes from a transcendental equation that we will not examine further in this post.
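While the exact TE11 cut-off comes from that transcendental equation, a widely used approximation takes the cut-off wavelength of the TE11 coaxial mode to be roughly the average circumference of the two conductors, λ_c ≈ π(a + b). The inner and outer radii below are assumed values for a 50 Ω PTFE-filled cable, chosen for illustration; the approximation lands within a few percent of the quoted 35.3 GHz.

```python
import math

c0 = 3.0e8               # speed of light in vacuum, m/s
eps_r = 2.25             # relative permittivity of the dielectric (assumed)
a, b = 0.42e-3, 1.45e-3  # inner and outer conductor radii, m (assumed)

# Approximate TE11 cut-off of a coaxial line: cut-off wavelength is
# roughly the average circumference, pi * (a + b)
lambda_c = math.pi * (a + b)
f_c = c0 / (eps_r**0.5 * lambda_c)
```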

So what does this cut-off frequency mean for our results? Above that frequency, the energy carried in the TEM mode that we are interested in has the potential to couple to the TE11 mode. In a perfect geometry, like we have simulated here, there will be no coupling. In the real world, however, any imperfections in the cable could cause mode coupling above the cut-off frequency. This could result from a number of sources, from fabrication tolerances to gradients in the material properties. Such a situation is often avoided by designing cables to operate below the cut-off frequency of higher-order modes so that only one mode can propagate. If that is of interest, you can also use COMSOL Multiphysics to simulate the coupling between higher-order modes, as with this Directional Coupler tutorial model (although beyond the scope of today’s post).

Simulation of higher-order modes is ideally suited for a Mode Analysis study using the RF or Wave Optics modules. This is because the governing equation assumes fields of the form E(x,y)e^{-γz}, which is exactly the form that we are interested in. As a result, Mode Analysis will directly solve for the spatial field and complex propagation constant for a predefined number of modes. We can use the same geometry as before, except that we only need to simulate the dielectric core and can use an Impedance boundary condition for the metal conductor.

*The results for the attenuation constant and effective mode index from a Mode Analysis. The analytic line in the left plot, “Attenuation Constant vs Frequency”, is computed using the same equations as the high-frequency (HF) lines used for comparison with the results of the AC/DC Module simulations. The analytic line in the right plot, “Effective Refractive Index vs Frequency”, is simply the refractive index of the dielectric core, √ε_r. For clarity, the size of the “COMSOL — TEM” lines has been intentionally increased in both plots.*

We can clearly see that the Mode Analysis results of the TEM mode match the analytic theory, and that the computed higher-order mode has its onset at the previously determined cut-off frequency. It is also incredibly convenient that the complex propagation constant is a direct output of this simulation and does not require calculations of R, L, C, and G. This is because γ is explicitly included and solved for in the Mode Analysis governing equation. These other parameters can be calculated for the TEM mode, if desired, and more information can be found in this demonstration in the Application Gallery. It is also worth pointing out that this same Mode Analysis technique can be used for dielectric waveguides, like fiber optics.

At this point, we have thoroughly analyzed a coaxial cable. We have calculated the distributed parameters from the DC to high-frequency limit and examined the first higher-order mode. Importantly, the Mode Analysis results only depend on the geometry and material properties of the cable. The AC/DC results require the additional knowledge of how the cable is excited, but hopefully you know what you’re attaching your cable to! We used analytic theory solely to compare our simulation results against a well-known benchmark model. This means that the analysis could be extended to other cables, as well as coupled to multiphysics simulations that include temperature change and structural deformation.

For those of you who are interested in the fine details, here are a few extra points in the form of hypothetical questions.

- “Why didn’t you mention and/or plot all of the characteristic impedance and distributed parameters for the TE11 mode?”
- This is because only TEM modes have a uniquely defined voltage, current, and characteristic impedance. It is still possible to assign some of these values for higher-order modes, and this is discussed further in texts on transmission line theory and microwave engineering.

- “When I solve for modes using a Mode Analysis study, they are labeled by the value of their effective index. Where did TEM and TE11 come from?”
- These names come from the analytic theory and were used for convenience when discussing the results. This name assignment may not be possible for an arbitrary geometry, but what’s in a name? Would not a mode by any other name still carry electromagnetic energy (excluding nontunneling evanescent waves, of course)?

- “Why is there an extra factor of ½ in several of your calculations?”
- This comes up when solving electromagnetics in the frequency domain, notably when multiplying two complex quantities. When taking the time average, there is an extra factor of ½ as opposed to the equation in the time domain (or at DC). For more information, you can refer to a text on classical electromagnetics.

The following texts were referred to during the writing of this post and are excellent sources of additional information:

- *Microwave Engineering*, by David M. Pozar
- *Foundations for Microwave Engineering*, by Robert E. Collin
- *Inductance Calculations*, by Frederick W. Grover
- *Classical Electrodynamics*, by John D. Jackson

The detection and removal of landmines and IEDs is important for both humanitarian and military purposes. While the term for the process of detecting these mines — *minesweeping* — is the same in both cases, the removal process is referred to as *demining* in times of relative peace and *mine clearance* during times of war. The latter case refers to when mines are removed from active combat zones for tactical reasons as well as for the safety of soldiers.

When a war ends, landmines may still be in the ground and detonate under civilians, leading to casualties. The majority of the mines are located in developing countries that are trying to recover from recent wars. Aside from being politically unstable, these countries are unable to farm viable land that is strewn with IEDs, keeping their economies in poor positions. Unfortunately, finding and removing the dangerous devices can be rather difficult.

*A U.S. Army detection vehicle digs up an IED during a training exercise.*

In efforts to locate and remove landmines, a mechanical approach is one option. With this method, an area with known landmines is bombed or plowed using sturdy, mine-resilient tanks to detonate them safely. For a more natural approach, dogs, rats, and even honeybees are trained to detect landmines with their sense of smell, and they are usually too light to trigger detonation. Biological detection methods offer another option, utilizing plants and bacteria that change color or become fluorescent in the presence of certain explosive materials. Once the mines are detected, they are safely removed from the area.

*A trained rat searches for landmines in a field.*

One method can provide more knowledge about an area that contains IEDs: *electromagnetic detection*. An important element within electromagnetic detection is a process called *ground-penetrating radar* (GPR), which uses electromagnetic waves to create an image of a subsurface, revealing the buried objects.

GPR involves sending electromagnetic waves into a subsurface (the ground) through an antenna. The transmitter of the antenna sends the waves, and the receiver collects the energy reflected off of the different objects in the subsurface, recording the patterns as real-time data.
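The timing underlying this measurement is simple: the wave travels through the soil at c/√ε_r, so a recorded two-way travel time maps directly to a target depth. A minimal sketch, where the permittivity and echo time are assumed illustrative values:

```python
c0 = 3.0e8  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_s, eps_r):
    """Target depth from a two-way GPR travel time in soil with permittivity eps_r."""
    v = c0 / eps_r**0.5  # wave speed in the soil
    return v * two_way_time_s / 2  # halve: the wave travels down and back up

depth = gpr_depth(10e-9, 9.0)  # a 10 ns echo in moist soil with eps_r = 9 (assumed)
```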

*Data from a traditional GPR scan of a historic cemetery.*

With recent developments in landmine cloaking technology, identifying buried objects through traditional GPR has become more challenging. Dr. Reginald Eze and George Sivulka from the City University of New York — LaGuardia Community College and Regis High School sought to improve electromagnetic IED detection by testing the method under different variables and environmental situations. By creating an intelligent subsurface sensing template with the help of COMSOL Multiphysics, the research team was able to determine better ways to safely locate and remove landmines and IEDs.

Let’s dive a bit deeper into their simulation research, which was presented at the COMSOL Conference 2015 Boston.

When setting up their model of the mine-strewn area, the researchers needed to ensure that they were accurately portraying a real-world landmine scenario. They started with a basic 2D geometry and defined the target objects and boundaries. The different layers of the model featured:

- A homogeneous soil surface with varying levels of moisture
- Air
- The landmine

The physical parameters in the model included relative permittivity; relative permeability; and the conductivity of the air, dry soil, wet soil, and TNT (the explosive material used in the landmine).

Using the *Electromagnetic Waves, Frequency Domain* interface in the RF Module, the team built a model consisting of air, soil, and the landmine. Additionally, a perfectly matched layer (PML) was used to truncate the modeling domain and act as a transparent boundary to outgoing radiation, thus allowing for a small computational domain. A transverse electric (TE) plane wave was applied to the computational domain in the downward direction. The scattering results were analyzed via LiveLink™ *for* MATLAB®.

*The scattering effect of a wave on a landmine in wet soil (left) compared to dry soil (right).*

The research team studied the radar cross section (RCS), which quantifies the scattering of the waves off of various objects. Their studies were based on five key factors:

- Projected cross section
- Reflectivity
- Directivity
- Contrast between the landmine and the background materials
- Shapes of the landmine and the ground surface

With each adjustment to an environmental parameter, a parametric sweep was performed from 0.5 GHz to 3.0 GHz in 0.5 GHz steps. These sweeps enabled an educated selection of the optimal frequency for IED detection in each environmental scenario.
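The sweep logic can be sketched as a simple loop: evaluate a detection metric at each frequency and keep the best one. The metric function below is a hypothetical stand-in for the actual scattered-field/RCS computation, which the team performed in COMSOL Multiphysics; the example values are constructed only to illustrate the selection step.

```python
# Frequencies from 0.5 to 3.0 GHz in 0.5 GHz steps, as in the study
freqs = [0.5e9 * (i + 1) for i in range(6)]

def detection_metric(f, environment):
    # Placeholder for the scattered-field evaluation done in the full
    # simulation; a real implementation would replace this lookup.
    return environment.get(f, 0.0)

def optimal_frequency(environment):
    # Pick the sweep frequency that maximizes the detection metric
    return max(freqs, key=lambda f: detection_metric(f, environment))

# Hypothetical metric values peaking at 2 GHz, matching the reported optimum
example_env = {f: 1.0 - abs(f - 2.0e9) / 3.0e9 for f in freqs}
best = optimal_frequency(example_env)
```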

*A parametric sweep used to identify the optimal frequency for a landmine detection system.*

The simulation results highlighted how the scattering patterns change with each parameter. For example, as the depth of the target increased, the scattering effects became more negligible: the deeper the mine was buried, the more the soil interfered with the wave.

The results also showed that dry soil has more interference with the RF signal than wet soil. Both the size and depth of the mine were related to the amount of scattering. For instance, the shallower the mine was buried, the more easily it was detected. The parametric sweep of the frequencies indicated that the optimal frequency for detecting anomalies in the subsurface scan was 2 GHz.

*The scattering amplitude for a landmine buried in an air/wet soil/dry soil layer combination (left) compared to air/dry soil/wet soil (right).*

Studying the parameters and their effects on the scattering patterns of the waves offers insight into the objects that are being detected, including their chemical composition. Such knowledge makes it easier to identify an object, whether a TNT-based landmine, another type of IED, a rock, or a tree root.

Through simulation analyses, the researchers gained a more comprehensive understanding of the microphysical parameters and their impact on the scattering of waves off of different objects. This gave them a better idea of the remote sensing behavior, offering potential for increased accuracy in landmine detection and removal. Such advancements could lead to safer environments, particularly within developing areas of the world.

- Read the full paper: “Remote Sensing of Electromagnetically Penetrable Objects: Landmine and IED Detection“
- View the research poster, which received the Popular Choice Poster award at the COMSOL Conference 2015 Boston

*MATLAB is a registered trademark of The MathWorks, Inc.*

Skin cancer affects numerous people around the world and is recognized as the most common form of cancer in the United States. Despite its prominence, this disease is highly treatable when skin tumors are detected early and removed. These tumors can be identified during monthly self-examinations and with the help of medical professionals. However, noninvasive skin tumor detection tools, such as *dielectric probes*, are emerging as an alternative.

To identify tumors, dielectric probes can utilize a millimeter wave with frequencies of either 35 GHz or 95 GHz. This millimeter wave has a sensitive reflective response to water content, which it uses as a means of detecting skin tumors. Such tumors possess a different *scattering parameter* or *S-parameter* than that of healthy skin, and the probes locate tumors by identifying these abnormal S-parameters.

Through simulation, we can evaluate the functionality of a conical dielectric probe and ensure its safety as an alternative for detecting skin tumors.

Our 2D axisymmetric tutorial model consists of a metallic circular waveguide, a tapered PTFE dielectric rod, a skin phantom, an air domain, and perfectly matched layers (PMLs).

In this example, we model our waveguide as a perfect electric conductor (PEC) and assume that its conductivity is high enough to negate any loss. The waveguide terminates at a circular port on one end and is connected to the dielectric rod on the other end. The dielectric rod is designed for impedance matching between the waveguide and the air domain. It is symmetrically tapered and supported on the rim of the waveguide by a ring structure. The tip of the rod touches the skin phantom, and the whole device uses a low-power 35 GHz Ka-band millimeter wave when operating.
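For the metallic circular waveguide to carry the 35 GHz signal, its radius must place the dominant-mode cut-off below the operating frequency. The standard relation for an air-filled circular guide is sketched below as a consistency check; the tutorial's actual dimensions are not restated here, so the computed radius is only a lower bound, not a model parameter.

```python
import math

c0 = 3.0e8  # speed of light in vacuum, m/s

def te11_cutoff_circular(a):
    """Cut-off frequency of the dominant TE11 mode of an air-filled circular
    waveguide of radius a; 1.841 is the first root of J1'(x)."""
    return 1.841 * c0 / (2 * math.pi * a)

# Smallest radius whose TE11 cut-off sits at the 35 GHz operating frequency
a_min = 1.841 * c0 / (2 * math.pi * 35e9)
```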

*Left: Dielectric probe model. Right: The probe interacting with a skin tumor.*

To analyze the validity of the probe design, we first observe the electromagnetic properties of the circular waveguide and dielectric probe without the skin phantom. From the simulation results, we can conclude that the probe is functional.

*The dielectric rod’s wave propagation without the skin phantom.*

Next, we increase the complexity of our model with two additions: a healthy skin phantom and a skin phantom containing a tumor. This enables us to calculate and compare the S-parameters for each of these cases. Our findings show that the S_{11} value of the healthy phantom is -9.84 dB, while the phantom containing a tumor features an S-parameter value of -8.87 dB. These values indicate that more reflection occurs when the probe touches the skin phantom with a tumor. We can expect such a result, as tumors have a higher moisture content than healthy skin.
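These decibel values translate directly into reflected power fractions via |S11|² = 10^(S11[dB]/10), which makes the contrast between the two cases concrete:

```python
def s11_db_to_reflected_power(s11_db):
    """Fraction of incident power reflected, given S11 in dB."""
    return 10 ** (s11_db / 10)

healthy_p = s11_db_to_reflected_power(-9.84)  # healthy skin phantom
tumor_p = s11_db_to_reflected_power(-8.87)    # phantom with a tumor
```

Roughly 10% of the incident power is reflected by the healthy phantom versus about 13% by the phantom with a tumor, consistent with the tumor's higher moisture content.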

While we found the S-parameter approach to be functional, we also want to ensure that it is safe. To do so, we study the temperature distribution over the skin phantom surface in order to find the fraction of necrotic (damaged due to heat) tissue.

*Left: Temperature variation on a skin phantom with a tumor. Right: Plot of the necrotic tissue.*

Our analysis of a skin phantom with a tumor shows that, after ten minutes of low-power millimeter wave exposure, the temperature change is within 0.06°C. Even at the hottest spot, the temperature remains very close to the initial temperature of 34°C. With this information, we can assume that there are no harmful temperature differences. Furthermore, our results show that the fraction of necrotic tissue is negligibly small, indicating that the temperature rise induced by the probe has a negligible effect on the tissue.

- Download the tutorial: Modeling a Conical Dielectric Probe for Skin Cancer Diagnosis
- Read this blog post to learn about the use of simulation in one form of cancer treatment: Hyperthermic Oncology: Hyperthermia for Cancer Treatment

Electronic devices are a key component in our day-to-day lives. Imagine if it were possible to charge these devices without the need for wires or cords. The development of wireless power transfer (WPT) technology is making this possible, offering a simplified approach to charging electronics, including the ability to charge multiple devices at once. As the technology continues to grow, charging electronic devices wirelessly is becoming a reality in more and more applications, from phones to electric cars.

*Wireless charging spots at a coffee shop. Image by Veredai from Powermat Technologies — Own work, via Wikimedia Commons.*

As previously noted, WPT offers ubiquitous power by transmitting electrical power without the use of solid wires or conductors. Generally, this is achieved by using an electromagnetic field to transfer energy between two separate objects. Here, a power transmitting unit (PTU), which is connected to a power source, generates a magnetic field and a power receiving unit (PRU) captures this energy and converts it into usable power.

*A simple illustration of wireless power transfer. The PTU is shown on the left and the PRU is shown on the right.*

For some WPT systems, an important concern is that the orientation between the PTU and PRU can greatly affect the energy coupling. As such, a device may have to be carefully aligned on a PTU in order for it to charge. But at what point does a lack of alignment between a PTU and a PRU impede the energy coupling?

Here, we’ll use simulation to investigate how a change in orientation affects wireless power transfer antennas.

The wireless power transfer tutorial presented today analyzes the energy coupling between two circular loop antennas. In this example, the antennas are made of a polytetrafluoroethylene (PTFE) board and have a thin copper layer on top, which is modeled as a perfect electric conductor (PEC). Each of the devices feature a lumped inductor and a lumped port that can excite or terminate the antenna.

The antennas have a UHF RFID operating frequency of 915 MHz and a shape that inherently provides the ability to perform inductive coupling.

*The model geometry. Note that the air domain and perfectly matched layers (PMLs) are not included here.*

In our simulation, the receiving antenna rotates while the transmitting antenna maintains a fixed location. This setup is similar to having a charger in a fixed position and placing a mobile phone on it at different angles.

The changing orientation allows us to identify the impact a change in position has on the energy coupling. To visualize this effect, we model the E-field norm distribution and the power flow from the transmitting antenna at different rotation angles of the receiving antenna.

*The E-Field norm and power flow (arrow plot) of our wireless power transfer antennas.*

The results show that when the antennas are facing each other, which occurs when the receiving antenna’s angle of rotation is 0 degrees, the fields are strongly coupled — an indication of successful wireless power transfer. However, by the time the receiving antenna reaches 90 degrees of rotation, the power flow penetrates the receiving antenna without any distortion. At this rotation, there is almost no coupling and no hot spot in the coupling area. As such, we can conclude that the power transfer between these two antennas is greatly reduced at this angle.
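A rough first-order way to see why the coupling collapses at 90 degrees: the magnetic flux linking the receiving loop scales approximately with the cosine of the rotation angle, so the induced voltage vanishes when the loops are perpendicular. The baseline coupling coefficient below is an arbitrary illustrative number, not a value from the model.

```python
import math

def coupling_factor(theta_deg, k0=0.3):
    """First-order estimate of coupling vs. rotation angle.

    k0 is an assumed coupling coefficient for aligned loops; the flux
    linkage of the rotated receive loop scales roughly with cos(theta).
    """
    return k0 * abs(math.cos(math.radians(theta_deg)))
```

This simple model ignores near-field details captured by the full simulation, but it reproduces the qualitative trend: strong coupling at 0 degrees, essentially none at 90.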

In the future, we can increase the functionality of WPT antennas by creating systems that function at a wide range of orientations, enabling the charging of electronics without concern for their particular placement.

- Download the tutorial model: Simulating Wireless Power Transfer in Circular Loop Antennas
- Read a user story about wireless power transfer on page 8 of the 2015 *Multiphysics Simulation* IEEE insert

By now, you are likely familiar with the material known as graphene. Much of the excitement surrounding graphene is due to its exotic material properties. These properties manifest themselves because graphene is a 2D sheet of carbon atoms that is one atomic layer thick. Graphene is discussed as a 2D material, but is it *really* 2D or is it just incredibly thin like a very fine piece of paper? It is one atom thick, so it must have thickness, right?

*A schematic of graphene.*

This is a complex question that is better directed towards researchers within the field. It does, however, lead us to another important question within the simulation environment — should we simulate graphene as a 2D sheet or a thin 3D volume?

To answer this, there are various important contributions that must first be discussed.

From a simulation standpoint, we want our model to accurately represent reality. This is accomplished through verification and validation procedures that often involve comparisons with analytical solutions. In open areas of research, such as the investigation of novel materials like graphene, the verification and validation process depends on several interlocking pieces. This is because there may not be any benchmarks or analytical results for comparison, and the theoretical predictions may be hypotheses that are awaiting experimental verification.

For graphene, the process begins with a theory — like the random phase approximation (RPA) — that describes the material properties. Graphene of a sufficiently high quality must then be reliably fabricated, and done so in large enough sample sizes for experimental measurements to be conducted. Lastly, the experiments themselves must be performed, with the results analyzed and compared to the theoretical predictions. The process is then repeated as required.

Numerical simulation is an integral part of every stage within the research process. Here, we will focus solely on its use in the comparison of theoretical predictions and experimental results. Theoretical predictions do not always come in simple and straightforward equations. In such cases, the theory can be solved numerically with COMSOL Multiphysics, offering a closer comparison with experimental results.

When performing simulations in active research areas, it is important to keep the previously mentioned research cycle in mind. A simulation can be set up correctly, but if it uses incorrect theoretical predictions for the material properties, the simulation results will not show reliable agreement with the experimental results. Similarly, accurate theoretical predictions must be properly implemented in simulations in order to yield meaningful results — a particularly important concern when modeling graphene, the world’s first 2D material.

So what does it mean for an object to be 2D and how do we correctly implement it in simulation? This brings us back to our original question of whether it is better to model graphene as a 2D layer or a thin 3D material. Perhaps you can see the answer more clearly now. The simulation technique itself needs to be verified during the research process!

Let’s now turn to the experts.

Led by Associate Professor Alexander V. Kildishev, researchers at Purdue University’s Birck Nanotechnology Center are at the forefront of graphene research. Among their many works are graphene devices that are designed in COMSOL Multiphysics and then fabricated and tested experimentally. Professor Kildishev recently joined us for a webinar, “Simulating Graphene-Based Photonic and Optoelectronic Devices”, where he discussed important elements behind the modeling of graphene.

*When designing graphene and graphene-based devices, simulation helps to enhance design and optimization, achieving the highest possible performance.*

During the webinar, Kildishev showed simulation results in which graphene was treated as both a thin 3D volume and a 2D sheet. When conducting this research with his colleagues, he found that the best agreement between simulation results and experimental results is achieved through modeling graphene as a 2D layer. Using COMSOL Multiphysics, Kildishev also showcased simulations of graphene in the frequency and time domains.
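When graphene is treated as a 2D layer, the material enters the simulation as a surface conductivity rather than a bulk permittivity. A commonly used starting point is the Drude-like intraband term of the RPA conductivity, valid when the Fermi energy is much larger than the thermal energy. The Fermi energy and scattering time below are typical illustrative values, not parameters from Kildishev's models.

```python
import math

e = 1.602176634e-19     # elementary charge, C
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def graphene_sigma_intraband(omega, E_F_eV=0.4, tau=1e-13):
    """Drude-like intraband surface conductivity of graphene (S, sheet value).

    Common approximation for E_F >> k_B*T; E_F and tau are assumed,
    illustrative values.
    """
    E_F = E_F_eV * e
    prefactor = e**2 * E_F / (math.pi * hbar**2)
    return prefactor * 1j / (omega + 1j / tau)

sigma_thz = graphene_sigma_intraband(2 * math.pi * 1e12)  # at 1 THz
```

The resulting sheet conductivity is on the order of millisiemens, which is the quantity a 2D boundary condition would use in place of a volumetric material property.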

To learn more about the simulation of graphene, you can watch the webinar here. We also encourage you to visit the Model Exchange section of our website, where you can download the models featured in the webinar and perform your own simulations of 2D graphene.


When characterizing electronic devices that radiate electromagnetic waves, it is important to make sure that the radiated waves do not return to the device under test (DUT). Infinite space without surrounding objects is ideal, as such an environment is free of reflection effects (i.e., multipath fading) that cause phase distortion when the reflected waves add to the original wave. The closest equivalent to this setting on Earth is an open field, though there is still a significant effect from the ground.

*An antenna in the middle of an open field. Image by Dr Patty McAlpin, via Wikimedia Commons.*

If we know the exact spatial configuration between a transmitter and a receiver and are sure that the ground is the only object distorting the waves, we can remove the unwanted signal path using a time-gating feature with a network analyzer. It is, however, not ideal to have to haul heavy equipment over to the open field every time you need to take measurements. Instead, it would be more convenient if you had access to a lab providing effectively infinite space — that is, an *anechoic chamber*. The anechoic chamber wall absorbs incident waves and does not interfere with the DUT.

*Antenna measurement in an anechoic chamber. Image by Max Alexander / PromoMadrid, via Wikimedia Commons.*

In an earlier blog post, we demonstrated how to design microwave absorbers using COMSOL Multiphysics and the RF Module. The pyramidal shape of periodic lossy structures gradually attenuates incident waves and generates almost no reflection, making the chamber an interference-free environment.

So, can we use these absorbers to simulate an antenna in the anechoic chamber? Of course!

*A conventional microwave absorber used in an anechoic chamber.*

The geometry of the original pyramidal object is extended to adjust the operating frequency of the absorber for its use with a biconical antenna tuned for the UHF band. The size of the pyramidal object is proportional to the wavelength of interest for the measurement.
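A common rule of thumb (an assumption here, not a value from the model) is that pyramidal absorbers should be at least a quarter wavelength deep at the lowest frequency of interest, which is why absorbers for lower bands grow so large:

```python
c0 = 3.0e8  # speed of light in vacuum, m/s

def min_absorber_depth(f_low):
    """Rule-of-thumb minimum pyramid depth: a quarter wavelength at f_low."""
    return c0 / f_low / 4

depth_uhf = min_absorber_depth(300e6)  # 300 MHz, the bottom of the UHF band
```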

The steps for building a model of an anechoic chamber are much like those for building a real-life chamber. We begin by creating an empty room that is 3.9 meters by 3.9 meters by 3.2 meters. The outer wall is covered by a perfect electric conductor, mimicking a conductive coating thick enough to block all incoming signals from outside the chamber. Absorbers are then added to all six walls.

At the center of the chamber, we place our tutorial model of a biconical antenna. Our findings show that the antenna’s performance is very similar to the results found in the example from our Application Gallery. The figure below offers a beautiful visualization of the contours of the magnitude of the electric field.

*Simulation of a biconical antenna in an anechoic chamber.*

Due to the chamber’s complicated geometry and size, this simulation requires more than 16 GB of memory. As we will demonstrate next, there is a way to simplify this process.

My colleague Walter Frei previously highlighted different approaches for modeling a domain with open boundaries — in particular, perfectly matched layers and scattering boundary conditions. Using perfectly matched layers (PMLs), we can create the perfect anechoic chamber within the simulation environment.

*The frames of a biconical antenna are modeled as boundaries. The surrounding air domain and perfectly matched layers are required for the simulation. Only half of the PMLs are shown in this figure.*

For this example, the operating frequency is in the conventional VHF range, which extends from 60 MHz to 240 MHz. To simplify modeling steps and reduce the required computational resources, we assume that the antenna frame structure is geometrically flat and very thin. Because the thickness is greater than the skin depth in the given frequency range, it is reasonable to model the structure as a perfect electric conductor.

A lumped port with a 50 Ω reference impedance is assigned to the gap located at the center of the two structures composed of hexagonal frames. The antenna is enclosed by a spherical air domain. The outermost layers of the air domain are configured as PMLs that absorb all outgoing radiation from the antenna and work as an anechoic chamber during the simulation.

*Electric field distribution on the* yz*-plane in dB at 70 MHz. The electric field is resonant over the entire antenna structure.*

*Voltage standing wave ratio (VSWR) plot with a log scale on the* y*-axis. It presents a VSWR of approximately 3:1 on average.*

The figure above illustrates the electric field distribution in dB, as well as an arrow plot depicting the directional properties of the field at 70 MHz. When the frequency is in the lower range, the electric field is confined to the entire structure. As the frequency increases, the reacting area gradually decreases. Thus, the part of the antenna structure that is responsive to electromagnetic waves becomes shorter around the center of the lumped port. The computed VSWR is approximately 3:1 on average. This is close to the performance of commercial off-the-shelf biconical antennas for EMI/EMC measurements.
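For reference, a VSWR of 3:1 corresponds to a reflection coefficient magnitude of 0.5, or about 6 dB of return loss:

```python
import math

def vswr_to_return_loss_db(vswr):
    """Return loss in dB for a given voltage standing wave ratio."""
    gamma = (vswr - 1) / (vswr + 1)  # reflection coefficient magnitude
    return -20 * math.log10(gamma)

rl = vswr_to_return_loss_db(3.0)  # ~6 dB for a 3:1 VSWR
```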

*3D far-field pattern at 70 MHz. The pattern resembles that of a typical half-wave dipole antenna.*

The 3D far-field radiation pattern shows the same omnidirectional characteristics on the H-plane. The suggested modeling configuration requires less than 2 GB of memory to compute the far-field radiation pattern and the VSWR of a biconical antenna made of lightweight hexagonal frames. Thus, it is much easier and faster to set up this model than the full anechoic chamber simulation.

- Check out these related blog posts:
- Download these tutorial models from our Application Gallery: