Serious engineering and scientific computing problems involve working with large amounts of data — anywhere from megabytes to gigabytes of data are commonly generated during a COMSOL Multiphysics simulation. To generate and store this data, we will want to use computers with fast processors, a lot of Random Access Memory (RAM), and a large hard drive. To visualize this data, it is important to have a high-end graphics card.
The COMSOL Multiphysics workflow can be divided into four steps: preprocessing (setting up the geometry, materials, and boundary conditions), meshing, solving, and postprocessing.
Under ideal conditions, everyone would be working on a high-end computer with more than enough memory and processing power for all of these steps. But realistically, we have to make do with what is available on each user’s desktop. If we need to solve larger models, we will want to access a shared computing resource across a network.
Here lies the issue: Passing data back and forth across a network is a lot slower than passing data around inside your computer, especially when it comes to the graphics-intensive postprocessing step. This becomes particularly apparent when using a virtual desktop application, which has to continuously send many megabytes of graphics data over the network. So, let’s see how the COMSOL Multiphysics Client-Server mode addresses this issue.
One COMSOL Multiphysics Floating Network License is used during Client-Server operation.
Users typically start COMSOL Multiphysics on their local desktop or laptop computer and start drawing the geometry, defining material properties, applying loads, and setting boundary conditions. Since this primarily involves selecting objects on the screen and typing, the computational requirements are quite low.
Once your users start meshing and solving larger models, however, they can quickly exceed their local computational resources. Meshing and solving takes both time and RAM. If a model requires more RAM than what is available locally, the computer will become quite unresponsive for some time. Rather than upgrading each computer, you can use Client-Server mode so that users can access a remote computing resource.
At any time while using COMSOL Multiphysics, it is possible to connect to a remote computing resource via the Client-Server mode of operation. This is a two-step process. First, log onto the remote system and invoke the COMSOL Multiphysics Server, which will start up the COMSOL Multiphysics Server process and open a network connection. Second, on the local machine, simply enter the network connection information into an open session of COMSOL Multiphysics. The software then transparently streams the model data and results back and forth over the network and uses the remote computing resource for all computations.
It is possible to disconnect from the COMSOL Multiphysics Server as long as it is not in the middle of a computation. This will free up the shared computing resource for other users. It can be good practice to do so during postprocessing, which primarily involves visualization of data and is less computationally intensive. Displaying the results is always handled locally, so it is important to have a good graphics card. (Tip: Check out this list of tested graphics cards.)
You can see that running your simulations in Client-Server mode allows each part of your IT infrastructure to do what it does best. You can run the COMSOL Multiphysics Server on your high-performance computing resources, while your users work on their local machines for graphics visualization. Other than the number of licenses that you have available, there is no limit to the number of simultaneous users running a server at any one time. In fact, you can run COMSOL Multiphysics in Client-Server mode all the time. Preprocessing, meshing, solving, and any non-graphical postprocessing computations can all be done on the server. By taking advantage of your organization’s shared computing resources, your users will not need an upgrade of their desktop computers every time they want to run a larger COMSOL Multiphysics model.
The COMSOL Client-Server capabilities, available as part of the Floating Network License, allow you to run your large COMSOL Multiphysics models on the best computers that you have available, so you do not have to buy every user a large workstation. It is a great option for any organization.
From an administrative point of view, the Floating Network License (FNL) can be configured to control and restrict who uses the license, track what it is used for and for how long, and identify which licenses need to be upgraded. Similarly, it allows you to control who can borrow licenses, how many, and for how long.
The License Manager can also generate a report within the log file to disclose license request activity. This report will be useful for seeing who is most active and what modules are popular. With this information, you can identify critical modules required for high-priority projects and then utilize the License Manager to control these. This includes reserving a number of licenses to be used exclusively by the more important projects, but still allowing other projects to continue making progress. Restricting or reserving the use of specific modules or functionalities couldn’t be easier, and is done in the “Options” file.
An FNL licenses COMSOL Multiphysics for use on one network. One of the benefits of this license type is therefore that you can easily install the software on as many machines on the network as you need. Even if you’re the only one using it, you can still install it on your workstation or laptop, a cluster, and as many other machines as you might access.
Alternatively, if a number of people on your team will need to access COMSOL Multiphysics, you may want to automate the installation. You, as the administrator, can provide an installation script that automatically installs COMSOL Multiphysics on their systems without the need for any user input. As long as the user can reach the license server, it is really easy to point to it and start modeling without the hassle of contacting IT to get an installation.
A COMSOL Multiphysics FNL allows several users to work with separate computational resources at different points during the day. For instance, you might want to create a model on your workstation in the morning, compute it on a High Performance Computing (HPC) cluster in the afternoon, and check the results on your laptop in the evening.
By contacting the License Manager from each machine, the same license follows you through all three sessions: modeling on your workstation in the morning, computing on the cluster in the afternoon, and checking results from home in the evening.
But what if you don’t have access to VPN at home, need to travel, or want to use COMSOL Multiphysics somewhere else without internet access? There is a solution to this as well. You can borrow a license from the server, specify how long you want to borrow it, and it will be returned when you no longer need it. The only thing you need to do is start your regular COMSOL Multiphysics session (on your laptop, for instance), set how long the license is to be borrowed for, and you are ready to disconnect and start simulating on the airplane.
Another benefit of an FNL is the ability to run COMSOL software remotely via a remote desktop application. Whether you’re connecting from your office or home, you can use remote desktop to set up your model and then reconnect later to verify the results. You avoid tying up your laptop or workstation with heavy computations, which also enables you to multitask.
The FNL allows you to utilize clusters to distribute tasks between different computation units, increasing productivity and allowing for more resource-demanding computations. FNLs are very popular for HPC and embarrassingly parallel applications, as they make use of the COMSOL software’s advanced parallel processing capabilities. You can utilize these capabilities by pooling your workstations, setting up a scheduler, and scheduling your jobs to run after work hours.
It is worth noting that any FNL will allow for an unlimited number of nodes in your cluster to be utilized for your simulations. Regardless of whether we’re talking about the size of your models, the number of your co-workers or projects, or nodes in a cluster, our Floating Network License is truly scalable.
A spinning wheel experiences centrifugal loading that results in stresses throughout the part. A regular pattern of holes has been cut into the wheel hub to reduce the mass. The von Mises stresses due to the centrifugal forces are shown. It is desirable to further reduce the mass while keeping the stresses below a critical value.
Although we could model the entire wheel at once, there is both mirror and rotational symmetry in this part, making it possible to reduce the model and thereby minimize the computational requirements. Symmetry boundary conditions are used to restrain the part.
A body load is applied in terms of the rotational velocity, rotational axis, and material density to model the centrifugal force. The model is solved using the stationary solver, that is, assuming a constant rotational speed.
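The body load described above works out, per unit volume, to f = ρω²r directed radially outward from the rotation axis. As a rough sanity check outside COMSOL, here is a minimal Python sketch of that load; the density and rotational speed are hypothetical placeholder values, not taken from the model:

```python
import math

def centrifugal_load(rho, omega, x, y):
    """Centrifugal body load per unit volume, f = rho * omega^2 * r,
    directed radially outward from the rotation axis (here the z-axis)."""
    r = math.hypot(x, y)              # distance from the rotation axis
    if r == 0.0:
        return (0.0, 0.0)             # no load on the axis itself
    magnitude = rho * omega ** 2 * r  # N/m^3
    return (magnitude * x / r, magnitude * y / r)

# Hypothetical values: steel density, 3000 rpm, a point 0.1 m from the axis.
rho = 7850.0                              # kg/m^3
omega = 3000.0 * 2.0 * math.pi / 60.0     # rad/s
fx, fy = centrifugal_load(rho, omega, 0.1, 0.0)
```

For a fixed angular velocity, the load grows linearly with distance from the axis, so material far from the axis contributes disproportionately to the loading.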
In this case, let’s assume that there is already a manufacturing process in place, and we would like to make a minimal change to the overall design of the part in order to reduce retooling costs. A natural choice of design variables would be to change the radii of the holes in the hub. Therefore, we go back to the geometry sequence and parameterize both the hole radii as well as their locations. We can also figure out, based purely on a geometric analysis, that there must be bounds on the maximum radius of each hole, otherwise the regions between the holes would get too thin and the holes would overlap. We will also put a bound on the minimum radius, since we do not want the holes to disappear completely.
The optimization objective here will be simply to reduce the mass of the part, which is the integral of the material density over all domains.
The optimization objective is to minimize the mass, the integral of the density.
The constraint is a little bit more complex: we want to keep the peak stress in the part below a critical value. However, we do not know ahead of time where the peak stress will occur. If we make either the inner or outer holes too small, this will lead to a stress concentration around the hole. If we make either of the radii too large, the material between the holes can get too thin, also leading to high stresses. Therefore, we must monitor the maximum stress throughout the part and constrain it to be below a specified peak stress. This is a non-differentiable constraint, so it specifically requires a gradient-free optimization method.
The peak stress is monitored via a Domain Probe, and given the name PeakStress.
The peak stress variable is constrained to stay within an upper bound.
To solve the optimization problem, an Optimization feature is added to the Study branch. The Nelder-Mead method is one of the two gradient-free methods (the other being Coordinate Search). The gradient-free optimization algorithms also allow the geometry to be remeshed as the dimensions change.
The objective function and constraint are defined from the Optimization branch in the Model Tree. The control variables are given initial values, and we specify upper and lower bounds. The optimal design is significantly different: the mass is reduced by 20% while the constraint on the peak stress is maintained.
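To illustrate the kind of algorithm at work, here is a minimal Python sketch of a gradient-free coordinate search (the second of the two methods mentioned above) applied to a toy stand-in for the wheel problem. The mass and peak-stress functions below are invented placeholders, not the actual finite element quantities, and the stress constraint is enforced with a simple penalty term; COMSOL's implementation is considerably more sophisticated.

```python
def coordinate_search(f, x0, bounds, step=0.1, tol=1e-4):
    """Minimal gradient-free coordinate search with box bounds.
    Probes +/- step along each coordinate; halves the step when no
    probe improves the objective."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] = min(max(trial[i] + d, bounds[i][0]), bounds[i][1])
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

# Toy stand-ins for the wheel problem (NOT the real model):
# "mass" falls as the hole radii r1, r2 grow; "peak stress" rises.
def mass(r):        return 10.0 - 3.0 * r[0] - 2.0 * r[1]
def peak_stress(r): return 1.0 + 4.0 * r[0] ** 2 + 3.0 * r[1] ** 2

STRESS_MAX = 3.0

def objective(r):
    # Enforce the stress cap with a penalty, keeping the method gradient-free.
    violation = max(0.0, peak_stress(r) - STRESS_MAX)
    return mass(r) + 1e3 * violation

r_opt, m_opt = coordinate_search(objective, [0.1, 0.1], [(0.05, 1.0), (0.05, 1.0)])
```

The search grows both radii to shed mass until the penalized stress constraint pushes back, which mirrors how the real optimization trades mass against the peak-stress bound.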
]]>
The new user interface makes computing such a coordinate system easy by solving a flow-like equation (or in some cases an elasticity equation) where you simply define an inlet and an outlet for the “flow” of your coordinate’s principal axis. Now, curvilinear coordinates that follow curved geometry objects aren’t necessarily uniquely defined. What happens for example if the cross section of your model has narrow parts? Is the anisotropic material of the model trimmed as if by a cookie cutter or is it squeezed together in the narrow parts? Does your object have sharp corners? Because of these ambiguities, three different methods are offered: Diffusion Method, Elasticity Method, and Flow Method. These all give slightly different coordinates corresponding to the underlying equation solved (the equations are outlined in the COMSOL Multiphysics Reference Manual). There is actually also a fourth method, User Defined, where you are free to type in your own mathematical expressions for the curvilinear coordinates’ principal vector field.
The curvilinear coordinates can be used not only for defining anisotropic materials, but for all kinds of other applications, such as electrical currents or visualization. In a new tutorial model of the Nonlinear Structural Materials Module, four different curvilinear coordinates interfaces are used to visualize the fiber directions of an anisotropic Holzapfel-Gasser-Ogden material model. This is a hyperelastic material model for biomechanics applications and is useful for representing collagenous soft tissue in arterial walls.
The figure shows the fiber orientation for different fiber families in the media and adventitia. Arteries have layered structures with the intima inside, followed by the media and the adventitia. The two outer layers are predominantly responsible for the mechanical behavior. Both layers are made of collagenous soft tissues that show prominent strain stiffening. Families of collagen fibers give each layer anisotropic properties. These fiber reinforced structures enable blood vessels to sustain large elastic deformation. The Holzapfel-Gasser-Ogden (HGO) constitutive model described in a literature reference captures the anisotropic nonlinear mechanical response observed in excised artery experiments. This model demonstrates how this hyperelastic material is used in COMSOL Multiphysics, and the results are compared to those reported in the literature. The anisotropic directions are visualized using the new curvilinear coordinates user interface.
Let’s now take a look at a very simple heat transfer example and how we can use the new tool to compute the temperature in an S-shaped geometry with an anisotropic material that follows the shape. Similar structures are found in smartphones and flat screens, where they are used as passive heat sinks in which a highly anisotropic thermal conductivity spreads heat laterally. This particular example is created in COMSOL by using the Sweep geometry operation to sweep a rectangular cross section along an S-shaped curve.
A rectangular cross section and a parametric curve used as a basis for a geometric sweep.
The final swept geometry.
We can, of course, also import a CAD file. Once we have created the geometry, we continue by adding a Curvilinear Coordinates user interface from the Mathematics branch of the Model Wizard. The next step is to define an Inlet and an Outlet boundary condition. This will define the principal direction of the coordinate system.
An Inlet boundary condition is used to define the source of the curvilinear coordinates’ principal direction.
The other directions are computed automatically, but you can guide their definition by, for example, aligning them with one of the coordinate axes. In this example we use the Diffusion method. Also, if we would like an orthogonal coordinate system to be automatically defined, we can select the “Create base vector” check-box, as seen in this screenshot:
The “Create base vector” check-box.
Now we can just solve and get a visualization of the coordinate system:
A visualization of the computed curvilinear coordinates.
In this example we also add a Heat Transfer in Solids user interface, and to get something interesting we create a disk (using a Work Plane and a Circle) at the top surface, on which we define a high temperature condition. At the same time, we assign a cold temperature condition along the entire bottom surface of the model. When using an anisotropic material with a high conductivity in one direction, this is the temperature profile we get:
Temperature field in an anisotropic material.
We can see that the heat is spreading in the length-direction of the shape.
You may ask: How do I create and reference an anisotropic material? The first step is to create the material under the Materials node in the Model Tree. In this case, the thermal conductivity in the local x-direction is high, while it is low in the local y- and z-directions. We can formally write this as: k_x >> (k_y=k_z).
Definition of an anisotropic thermal conductivity.
But this is just the definition in a local (abstract) coordinate system. Then we need to reference the curvilinear coordinates that were automatically computed. This is done in the Heat Transfer in Solids settings window:
Reference to the computed curvilinear coordinates.
By referencing the Curvilinear System in this way, the anisotropic thermal conductivity defined by k_x, k_y, and k_z above will “know how to bend” accordingly.
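"Knowing how to bend" amounts to expressing the locally diagonal conductivity tensor in global coordinates at every point: K_global = R diag(k_x, k_y, k_z) Rᵀ, where the columns of R are the local base vectors of the curvilinear system. The following Python sketch shows this for a single point, with hypothetical conductivity values and an assumed orthonormal basis (the situation you get with the "Create base vector" option):

```python
def rotate_conductivity(k_local, basis):
    """Express a diagonal conductivity tensor, defined in a local
    curvilinear frame, in global coordinates: K = R * diag(k) * R^T,
    where the columns of R are the (orthonormal) local base vectors."""
    kx, ky, kz = k_local
    # K_ij = sum_n k_n * e_n[i] * e_n[j]  (spectral form of R diag(k) R^T)
    K = [[sum(k * e[i] * e[j] for k, e in zip((kx, ky, kz), basis))
          for j in range(3)] for i in range(3)]
    return K

# Hypothetical values: k_x >> k_y = k_z, with the local x-axis rotated
# 90 degrees so that it points along the global y-axis.
basis = [(0.0, 1.0, 0.0),   # local x direction (high conductivity)
         (-1.0, 0.0, 0.0),  # local y direction
         (0.0, 0.0, 1.0)]   # local z direction
K = rotate_conductivity((400.0, 1.0, 1.0), basis)
# The high conductivity now appears in the global y-y entry: K[1][1] == 400.0
```

In the model, the curvilinear coordinate solution supplies a (generally different) basis at every mesh point, so the high-conductivity direction follows the S-shape.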
Finally, we need to make sure that we first solve for the curvilinear coordinates and then for the heat transfer in solids. (Otherwise the heat transfer in solids wouldn’t know what material to use.)
This is done by using two Studies. Study 1 is used to compute the curvilinear coordinates and Study 2 is used for the heat transfer in solids. These are the settings for Study 1:
The Study 1 settings.
The Study 2 settings are similar, but here we also need to define what to do with the values of variables not solved for. We simply reference the solution of Study 1 in the section called Values of Dependent Variables:
The Study 2 Settings.
Now solve for Study 1 first and then Study 2. That’s it. We’ve seen how quickly we can set up a custom coordinate system for any shape. In this example we used a simple S-shaped geometry, but you can also try this now for geometry models with branches and other more complex geometrical features. Since this curvilinear coordinate method is computational, it works for any CAD model. Enjoy!
]]>
Let us start by looking at some data that was generated in another analysis package:
%X        Y         Z         VectorX   VectorY   VectorZ
-0.03041  0.013353  0.138253  0.001493  0.003518  -0.00302
-0.03862  0.01627   0.137537  0.001332  0.003296  -0.00329
-0.0355   0.010981  0.132823  9.60E-04  0.00287   -0.00287
...
The first line you see is a header for the columnar data. We have XYZ data, and at each of these points we have the x-, y-, and z-components of a vector, which is a force that we will want to read into COMSOL Multiphysics. The remaining rows of the file are the point cloud data.
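Before importing, it can help to sanity-check such a file outside COMSOL. The following Python sketch (written for this post, not part of COMSOL) parses the layout shown above, assuming whitespace-separated columns and a '%'-prefixed header:

```python
def read_point_cloud(path):
    """Parse a whitespace-separated point cloud file whose header
    line starts with '%', as in the snippet above.
    Returns (points, vectors) as lists of (x, y, z) tuples."""
    points, vectors = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("%"):
                continue  # skip the header/comment line and blank lines
            x, y, z, vx, vy, vz = (float(v) for v in line.split())
            points.append((x, y, z))
            vectors.append((vx, vy, vz))
    return points, vectors
```

A quick check like `len(points)` against the row count of the exporting package catches truncated or malformed files before they ever reach the Interpolation function.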
To read in this data, go to Model > Definitions > Functions and define a new Interpolation function. Like this:
Next you want to fill in the form as shown below:
Set the Data Source to File, and use the Browse button to locate the text file on your disk. The Spreadsheet data format is the default setting, and other formats are described in the documentation. Set the Number of Arguments to 3, since we are reading in XYZ data here, and toggle on the Use space coordinates as arguments check-box since the data we are reading in is a function of position. Switch the Frame drop-down menu to Material. Why? Because we are going to be applying loads to a structural problem, and by setting the Frame to Material, we specify that the loads are applied in the original, undeformed, configuration rather than the deformed, or Spatial, frame.
Finally, you want to enter a Function name. Here we can use Fx, Fy, and Fz for the components of a force vector. The Position in file column specifies that these data are in the three columns after the space coordinates. Note that COMSOL detects that there are three arguments and sets the Number of arguments field to 3 automatically.
Also note the Interpolation and Extrapolation settings. A linear interpolation method means that the spreadsheet data is mapped linearly from the source mesh points in the data file onto the destination mesh in COMSOL Multiphysics. If the COMSOL mesh lies outside of the space defined by the external data, then a constant extrapolation is used. These defaults are reasonable in most cases, and more details about the mapping are given in the documentation.
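In one dimension, the combination of linear interpolation with constant extrapolation described above behaves like the following Python sketch (an illustrative stand-in with made-up sample data, not COMSOL's actual implementation):

```python
import bisect

def interp_const_extrap(xs, ys, x):
    """Piecewise-linear interpolation over sorted sample points xs,
    with constant extrapolation outside the data range (the behavior
    described above for mesh points outside the external data)."""
    if x <= xs[0]:
        return ys[0]      # constant extrapolation on the left
    if x >= xs[-1]:
        return ys[-1]     # constant extrapolation on the right
    i = bisect.bisect_right(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

xs = [0.0, 1.0, 2.0]
ys = [0.0, 10.0, 0.0]
# interp_const_extrap(xs, ys, 0.5) -> 5.0; outside [0, 2] -> 0.0
```

The same idea extends to three arguments, with the interpolation performed over the source mesh defined by the XYZ columns of the data file.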
After you click the Import button, the form will look like this:
Wish to read in a new data file instead? You also have the option to Discard the data.
Now let us see how to use this point cloud data in a model. Let’s suppose we wish to compute stresses in an impeller. The loads in the data file we have just read in represent the fluid loads on the surface of the vanes. (We could just as easily have read in volumetric data, but in this example we have read in surface data.) The impeller model, and loads, are shown below:
Simulating loads on an impeller.
The blue arrows represent the loads read in from the file. Let’s take a look at how the boundary condition is defined here:
And that’s it! Just by calling Fx, Fy, and Fz we have used the loads defined in our text file. Here are the results, showing the stresses as well:
Stresses in an impeller.
To better illustrate probing, let’s use an example. We’ll show you how to use probes in the case of a transient thermal stress analysis. (Tip: Read our previous blog post for some background on thermal stress analysis.)
Let’s analyze a bipolar plate of a proton exchange membrane fuel cell (PEMFC). The geometry of this model is taken from the Model Library of the Structural Mechanics Module. The PEMFC is a candidate technology for future hydrogen-powered vehicles, but the very expensive materials needed have so far limited its use to high-cost applications, such as certain space missions. Fuel cell simulations contribute to designing less expensive fuel cells and making this technology more widely available. Fuel cell design requires extensive multiphysics simulations including stress, heat, CFD, and electrochemical analysis. We won’t look at CFD or electrochemical analysis here, but will instead show you a mechanical simulation. If you are interested in more advanced fuel cell simulations, check out the model tutorials of the Batteries & Fuel Cells Module.
A fuel cell stack consists of unit cells with an anode, a membrane, and a cathode connected in series through bipolar plates. The bipolar plates also serve as gas distributors for the hydrogen and air that are fed to the anode and cathode compartments, respectively. The picture below shows a schematic drawing of a fuel cell stack. The unit cell and the surrounding bipolar plates are shown in detail in the upper-right corner of the figure.
The fuel cell operates at temperatures just below 100°C (212°F), which means that it has to be heated at start-up. The heating process induces thermal stresses in the bipolar plates. The figure below shows the detailed model geometry. The plate consists of gas slits that form the gas channels in the cell, holes for the tie rods that keep the stack together, and the heating elements (in the picture labeled Heat source), which are positioned in the middle of the gas feeding channel for the electrodes.
In the COMSOL model of the bipolar plate, we use the Thermal Stress user interface and apply a volume heat source to the two central cylindrical domains.
There are also convective cooling boundary conditions applied to the sides of the plate.
A total power of 8 W is distributed over the two heating elements. We’ll ramp up the heat power by defining a look-up table that we name startup(), with time as the input argument. The bracket notation with 1/s (see image below) converts the input argument t to a unitless quantity before it is passed to the look-up table. This unit conversion is not required, but it is good practice to take charge of the unit handling.
The interpolation table is defined under Model Definitions and represents an increase from 0 to 1 over 10 seconds. We use the Piecewise cubic interpolation option to make sure we get a smooth curve. This curve will modulate the input power of 8 W over time so that at t=0 s the input power is 0 W and at t=10 s the input power is the final 8 W. We’ll see below that the time for the bipolar plate to reach steady-state is much longer than 10 seconds due to the relatively inefficient cooling:
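Conceptually, the ramp behaves like the following Python sketch. A cubic smoothstep stands in here for the piecewise cubic interpolation of the look-up table (the actual interpolated curve will differ slightly), and the function names mirror the ones used in the model:

```python
def startup(t):
    """Smooth ramp from 0 at t=0 to 1 at t=10, a stand-in for the
    look-up table with piecewise cubic interpolation described above.
    The argument t is the unitless time, i.e. t[1/s] in COMSOL notation."""
    if t <= 0.0:
        return 0.0
    if t >= 10.0:
        return 1.0
    s = t / 10.0
    return s * s * (3.0 - 2.0 * s)   # cubic smoothstep: zero slope at both ends

P_TOTAL = 8.0  # W, total power over the two heating elements

def input_power(t):
    """Heat power modulated by the ramp: 0 W at t=0 s, 8 W from t=10 s on."""
    return P_TOTAL * startup(t)
```

Because the ramp has zero slope at both ends, the power input is switched on and off without the kinks that a purely linear ramp would introduce, which is the point of choosing the Piecewise cubic option.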
Setting up probes is done from the Model Definitions node in the Model Tree of the COMSOL Desktop. In this case we’ll choose a Domain Point Probe (there are several other types of probes). Using the Point and surface normal Line entry method, we can interactively pick a point along a line that starts at a point on the model boundary and extends in the direction normal to that surface. The depth along the line is easily fine-tuned with the slider control, and the exact position of the point is shown in x, y, and z coordinates.
The probe location is indicated with a red dot along the line. Here, an arbitrary point is picked on the inside of the plate:
Underneath the Domain Point Probe node in the Model Builder tree you will find a node called Point Probe Expression. Here you can set the field variable you wish to evaluate and monitor during the solution process. The default for thermal stress analysis is the temperature field T, but this could really be any field expression you’d like to monitor, including gradient (partial derivative) components. For example, the partial derivative of T with respect to X would be entered as d(T,X) or simply TX (notice the capital X).
In the Study Settings window, we can use the Range tool to specify a start and stop time. Units can be used here and we’ll solve for 10 hours, entered as 10[h].
Now let’s see how probes can be used to avoid storing the entire solution for a large number of time steps. In the Range tool we set the Number of values to 2. This is the smallest possible value and will make sure to only store the full solution at t=0 and t=10[h]. The accuracy of the time-dependent solver is controlled by the Relative tolerance. In this case, the tolerance is lowered from the original 0.01 to 0.0001 (see image above). The lower the tolerance, the shorter the time-steps taken by the solver. This will have an impact on our probe data. We can set the probe to be updated at the same time steps as taken internally by the solver. (The time-stepping algorithm used by COMSOL for this simulation is a so-called variable-order BDF method that adapts its steps in time, based on the solution and the tolerance settings.)
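The effect of the tolerance on step density can be illustrated with a deliberately crude adaptive stepper in Python. This is not COMSOL's BDF method; it is a toy scheme that accepts or rejects steps by comparing one full step against two half steps, but it shows why a tighter tolerance produces more probe data points:

```python
def adaptive_euler(f, t0, t1, y0, rel_tol):
    """Crude adaptive time stepper: compares one full Euler step against
    two half steps and shrinks the step when the difference exceeds the
    tolerance. Returns the list of accepted time points.
    (COMSOL's variable-order BDF solver is far more sophisticated.)"""
    t, y, dt = t0, y0, (t1 - t0) / 10.0
    times = [t]
    while t < t1:
        dt = min(dt, t1 - t)
        full = y + dt * f(t, y)
        half = y + 0.5 * dt * f(t, y)
        two_half = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(two_half - full)
        if err <= rel_tol * max(abs(two_half), 1e-12):
            t, y = t + dt, two_half
            times.append(t)
            dt *= 1.5          # accepted: try a larger step next time
        else:
            dt *= 0.5          # rejected: retry with a smaller step
    return times

# Hypothetical cooling toward 20 degrees with a 100-second time constant:
cool = lambda t, T: -(T - 20.0) / 100.0
coarse = adaptive_euler(cool, 0.0, 1000.0, 90.0, 1e-2)
fine = adaptive_euler(cool, 0.0, 1000.0, 90.0, 1e-4)
# len(fine) > len(coarse): the tighter tolerance forces shorter steps
```

The accepted time points play the role of the "Steps taken by solver" output: they cluster where the solution changes quickly and thin out as it approaches steady state.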
In the Study Settings window, in the section called Results While Solving, we change from the Output from solver option (which would just give us the results at t=0 and t=10[h]) to Steps taken by solver. Like this:
Select Compute from the Study node. During the solution, we’ll now see a Probe Plot and Probe Table of the Temperature while the solver is working. If you have a simulation that takes a long time to solve, you can use the displayed Probe information to check if you set something up wrong in the model. Doing so allows you to then stop the simulation before it’s finished, go back and change some settings, and start over again.
As you can see, the probe output has many more data points in time than just t=0 and t=10[h]. By lowering the solver tolerance setting you can increase the number of data points. The table allows you to copy-and-paste the results for use in a spreadsheet. You can also change the settings of the Probe Plot if you wish to use different units or ordering of the axes (here we get temperature T vs. time t).
As a next step we can of course also visualize the Temperature field at 10 hours (36,000 s) as well as the von Mises Stress with an overlaid mesh plot.
Visualization of the Temperature field at 10 hours.
Visualization of the von Mises Stress with overlaid mesh.
Note: The mesh in this example has prismatic finite elements; triangles swept through the thickness of the plate.
CAD design of a pipe for fluid flow analysis. Only the solid is present and meshable.
Right click geometry and select Cap Faces.
Select the edges that are adjacent to the new cap.
The capped and filled geometry.
Let’s have a look at the physics settings for one of the models in the Model Library: Heat Sink with Surface-to-Surface Radiation. This model is interesting in its own right (think three modes of heat transfer) but I’ll focus on the icons.
Conjugate Heat Transfer node plus sub-nodes.
(Bonus Tip: I used the new “Sort by Space Dimension” feature.)
If you have experience with COMSOL, you probably recognize this structure. But have you noticed the details of the symbols?
Node icon designs indicate different space dimensions.
If a node has an icon with a fuchsia patch, you can be sure it’s a boundary condition. If it’s just the shaded “peanut” shape, it’s a domain (or 3D volume). How about some more info? The capital “D” indicates a default condition, which is automatically applied and cannot be deleted.
“D” means “Default”.
The icons are actually dynamic, meaning they change depending on what node is selected in the Model Builder. Check this out:
Selecting “Temperature 1” shows how this condition overrides “Thermal Insulation 1” and contributes with “Inlet 1”. Note how the icons change.
When I select the “Temperature 1” condition, a (soon-to-be-not) mysterious triangle appears near the bottom of the “Thermal Insulation 1” icon. This triangle and its position indicate that “Thermal Insulation 1” has been overridden by the selected node, namely “Temperature 1”. There is also a circle that appears next to “Inlet 1”. This indicates contribution. You can confirm all this in the settings window of “Temperature 1” by expanding the “Override and Contribution” section:
Contents of the “Override and Contribution” section of the “Temperature 1” node.
What if you now select “Thermal Insulation 1”, you ask? The icons change and look like this:
Icons change to indicate that “Thermal Insulation 1” is overridden by “Temperature 1” and “Outflow 1”.
Now the red triangle is on the “Temperature 1” and “Outflow 1” nodes, above the peanut-shaped icon. This indicates that both the “Temperature 1” and “Outflow 1” conditions override the selected node, “Thermal Insulation 1”.
There’s even more to the story, but you can read all about it in the documentation. While you may actually get away without being aware of these features, as you advance in using COMSOL, knowing this can really be a time saver.
What’s your favorite COMSOL convenience feature? The Player or Report Generator? Feel free to post your experience to comments.
]]>
Not the least of these enhancements are the “little things” — small usability improvements that can make life a lot easier (and modeling more efficient) for COMSOL users.
Take for instance buttons like “Build”, “Compute”, and “Plot”. These have been enlarged and given more descriptive names that will help new users (or those of us with bad aim) to get what they want faster.
Another useful feature is in the organization of the physics nodes in the Model Builder. You can now sort them by space dimension. That is, you can arrange them by Domains (3D), Boundaries (2D faces), Edges (1D segments), and Points. This can be very useful for mature models that require tighter organization for the sake of easy reference.
Before sorting:
Sort action:
After sorting:
Specification of parametric sweeps is quick and convenient in Version 4.3. The change to the interface is notable, with drop-down menus listing all defined global parameters and letting you define which combinations of parameters you want (in the case of multi-dimensional sweeps). The Range interface is nicer too, with an intuitive order of Start, Step, and Stop boxes. Units are now also supported in the Range interface.
The last “little” feature I’ll mention is the Word output format for the Report Generator. The Report Generator is always a crowd-pleaser when I show it at demos or during webinars. And now you can go directly from COMSOL to Word format. That includes equations, tables, and plots that you’ve come to expect in a COMSOL report.
Although these are little features, they can make a big difference to COMSOL users in the efficiency of their day-to-day work.
For an overview of all the new features, go to the COMSOL 4.3 Release Highlights.
]]>
tresca_smsld and mises_smsld if you are modeling in 3D with the Structural Mechanics Module). Now all you need to do is enter sqrt(0.5*(tresca_smsld^2+mises_smsld^2)) in any of the Expression fields and click OK to see your new stress distribution.
You probably didn’t think of it, but in the expression I just mentioned, sqrt, ^, and even + are all examples of operators. COMSOL offers a whole range of useful ones, not all equally obvious. Did you for instance know that the letter d will differentiate any variable or expression with respect to time or space? d(c,z) gives the derivative of a concentration c with respect to the z-coordinate. d(sqrt(0.5*(tresca_smsld^2+mises_smsld^2)),t) is the time-derivative of your stress. If you have created your own subdomain expression my_stress containing your stress definition, d(my_stress,t) gives the same results.
The at operator lets you access the solution at any time in postprocessing. This is handy if you want to see changes over a time interval. Plotting the expression at(20,p)-at(10,p) overrides the Solution at time setting and shows you the pressure increase between 10 and 20 seconds. The with operator lets you postprocess more than one parametric or eigensolution in a similar fashion.
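In spirit, the at operator is a lookup into the set of stored solutions, evaluated at whatever output time you name. A hypothetical Python sketch (the stored pressure values are invented for illustration):

```python
# Stored solutions keyed by output time, mimicking the role of "at"
# in postprocessing. Hypothetical pressures in Pa.
solutions = {0: 101325.0, 10: 101900.0, 20: 102650.0}

def at(t):
    """Look up the stored pressure solution at time t (illustrative
    stand-in for COMSOL's at(t, expr) operator)."""
    return solutions[t]

pressure_increase = at(20) - at(10)   # change between 10 s and 20 s
```

The difference of two lookups is exactly the "pressure increase between 10 and 20 seconds" expression described above, just spelled out in ordinary code.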
Another handy pair of operators is up and down. They live on boundaries and help you evaluate anything with discontinuities. Consider for example a temperature gradient on a boundary between two subdomains with different conductivities. gradT_ht will silently evaluate this gradient on both sides of the boundary and give you the average. With up(gradT_ht) and down(gradT_ht), however, you can decide which side you are interested in.
If you work with electromagnetics, you might have plotted the magnetic field in an eigenmode analysis only to find that it appears to be identically zero. Chances are it is non-zero but purely imaginary due to its 90-degree phase difference with a real-valued electric field. Use the imag operator to show its imaginary part, abs to plot the norm, or arg to see the phase angle. Note that the default plot for complex fields shows the real part.
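The relationship between these operators is just complex arithmetic, and can be checked in a few lines of Python (the field value is an arbitrary example, not from any model):

```python
import cmath
import math

# A field value with a 90-degree phase: purely imaginary, so its real
# part (what the default plot shows) is zero even though the field is not.
H = 2.5j

real_part = H.real          # 0.0  -> the "identically zero" default plot
imag_part = H.imag          # 2.5  -> what the imag operator shows
norm = abs(H)               # 2.5  -> what the abs operator shows
phase = cmath.phase(H)      # pi/2 -> what the arg operator shows (radians)
```

Seeing a zero real part alongside a non-zero norm is the telltale sign that the field is alive and well, just 90 degrees out of phase.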
This is just the tip of the iceberg. You can find the complete list of mathematical and other operators in the COMSOL Multiphysics Quick Start and Quick Reference.
]]>
As far as I know, COMSOL is the first developer of multiphysics software to take this step towards removing the financial barrier to large scale simulation on clusters. This is important for many of our customers. At recent COMSOL conferences in Boston and Milan, we had hundreds of attendees taking part in my tutorial on how to run COMSOL on Windows HPC Server 2008.
The next big event is SC09 in Portland, Oregon. If you are interested in HPC, this is a great event to attend. And feel free to visit us there (at Booth #236) – we’ll be happy to show you how to run COMSOL on clusters.
]]>