Serious engineering and scientific computing problems involve working with large amounts of data — anywhere from megabytes to gigabytes of data are commonly generated during a COMSOL Multiphysics simulation. To generate and store this data, we will want to use computers with fast processors, a lot of Random Access Memory (RAM), and a large hard drive. To visualize this data, it is important to have a high-end graphics card.
The COMSOL Multiphysics workflow can be divided into four steps: preprocessing, meshing, solving, and postprocessing.
Under ideal conditions, everyone would be working on a high-end computer with more than enough memory and processing power for all of these steps. But realistically, we have to make do with what is available on each user’s desktop. If we need to solve larger models, we will want to access a shared computing resource across a network.
Here lies the issue: Passing data back and forth across a network is a lot slower than passing data around inside your computer, especially when it comes to the graphics-intensive postprocessing step. This becomes particularly apparent when using a virtual desktop application, which has to continuously send many megabytes of graphics data over the network. So, let’s see how the COMSOL Multiphysics Client-Server mode addresses this issue.
One COMSOL Multiphysics Floating Network License is used during Client-Server operation.
Users typically start COMSOL Multiphysics on their local desktop or laptop computer and start drawing the geometry, defining material properties, applying loads, and setting boundary conditions. Since this primarily involves selecting objects on the screen and typing, the computational requirements are quite low.
Once your users start meshing and solving larger models, however, they can quickly exceed their local computational resources. Meshing and solving takes both time and RAM. If a model requires more RAM than what is available locally, the computer will become quite unresponsive for some time. Rather than upgrading each computer, you can use Client-Server mode so that users can access a remote computing resource.
At any time while using COMSOL Multiphysics, it is possible to connect to a remote computing resource via the Client-Server mode of operation. This is a two-step process. First, log onto the remote system and invoke the COMSOL Multiphysics Server, which will start up the COMSOL Multiphysics Server process and open a network connection. Second, on the local machine, simply enter the network connection information into an open session of COMSOL Multiphysics. The software then transparently streams the model data and results back and forth over the network and uses the remote computing resource for all computations.
It is possible to disconnect from the COMSOL Multiphysics Server as long as it is not in the middle of a computation. This will free up the shared computing resource for other users. It can be good practice to do so during postprocessing, which primarily involves visualization of data and is less computationally intensive. Displaying the results is always handled locally, so it is important to have a good graphics card. (Tip: Check out this list of tested graphics cards.)
You can see that running your simulations in Client-Server mode allows each part of your IT infrastructure to do what it does best. You can run the COMSOL Multiphysics Server on your high-performance computing resources, while your users work on their local machines for graphics visualization. Other than the number of licenses that you have available, there is no limit to the number of simultaneous users running a server at any one time. In fact, you can run COMSOL Multiphysics in Client-Server mode all the time. Preprocessing, meshing, solving, and any non-graphical postprocessing computations can all be done on the server. By taking advantage of your organization’s shared computing resources, your users will not need an upgrade of their desktop computers every time they want to run a larger COMSOL Multiphysics model.
The COMSOL Client-Server capabilities, available as part of the Floating Network License, allow you to run your large COMSOL Multiphysics models on the best computers that you have available, so you do not have to buy every user a large workstation. It is a great option for any organization.
From an administrative point of view, the Floating Network License (FNL) can be configured to control and restrict who uses the license, to track what it is used for and for how long, and to determine which licenses need to be upgraded. Similarly, it allows you to control who can borrow licenses, how many, and for how long.
The License Manager can also generate a report within the log file to disclose license request activity. This report will be useful for seeing who is most active and what modules are popular. With this information, you can identify critical modules required for high-priority projects and then utilize the License Manager to control these. This includes reserving a number of licenses to be used exclusively by the more important projects, but still allowing other projects to continue making progress. Restricting or reserving the use of specific modules or functionalities couldn’t be easier, and is done in the “Options” file.
An FNL licenses the use of COMSOL Multiphysics on one network. One of the benefits of this license type is therefore that you can easily install the software on as many machines on the network as you need. Even if you’re the only one using it, you will still be able to install it on your workstation or laptop, a cluster, and as many other machines as you might access.
Alternatively, if a number of people on your team will need to access COMSOL Multiphysics, you may want to automate the installation. You, as the administrator, can provide an installation script that automatically installs COMSOL Multiphysics on their systems without the need for any user input. As long as the user’s machine can reach the license server, it is easy to point to it and start modeling without the hassle of contacting IT for an installation.
A COMSOL Multiphysics FNL allows several users to work with separate computational resources at different points during the day. For instance, you might want to create a model on your workstation in the morning, compute it on a High Performance Computing (HPC) cluster in the afternoon, and check the results on your laptop in the evening.
Each of these sessions simply checks out the license by contacting the License Manager, so you can create your model on your workstation in the morning, run it on the cluster through the afternoon, and check your results from home in the evening.
But what if you don’t have access to VPN at home, you need to travel, or use COMSOL Multiphysics somewhere else without internet access? There is a solution to this as well. You can borrow a license from the server and specify how long you want to borrow it, and it will be returned when you don’t need it anymore. The only thing you need to do is start your regular COMSOL Multiphysics session (on your laptop, for instance), set up how long the license is going to be borrowed for, and you are ready to disconnect and start simulating on the airplane.
Another benefit of an FNL is the ability to run the COMSOL software remotely via a remote desktop application. Whether you’re connecting from your office or home, you can use remote desktop to set up your model and then reconnect later to verify the results. You’re not exhausting your laptop or workstation with heavy computations, which frees you up to multitask.
The FNL allows you to utilize clusters to distribute tasks between different computation units, thus increasing productivity and allowing for more resource-demanding computations. FNLs are very popular for HPC and embarrassingly parallel applications, as they make use of the COMSOL software’s advanced parallel processing capabilities. For example, you can cluster your existing workstations, set up a scheduler, and schedule your jobs to run after work hours.
It is worth noting that any FNL will allow for an unlimited number of nodes in your cluster to be utilized for your simulations. Regardless of whether we’re talking about the size of your models, the number of your co-workers or projects, or nodes in a cluster, our Floating Network License is truly scalable.
COMSOL uses the finite element method, which transforms the governing PDE into an integral equation — the weak form, in other words. Taking a closer look at the COMSOL simulation software, you may notice that many boundary conditions are formulated in terms of integrals. A couple of examples are Total heat flux and Floating potential. Integration also plays a key role in postprocessing, as COMSOL provides many derived values based on integration, such as electric energy, flow rate, or total heat flux. Of course, our users can also use integration in COMSOL for their own purposes, and here you will learn how.
A general integral has the form

\int_{t_0}^{t_1}\int_{\Omega} F(u)\,\mathrm{d}\Omega\,\mathrm{d}t,

where [t_0,t_1] is a time interval, \Omega is a spatial domain, and F(u) is an arbitrary expression in the dependent variable u. The expression can include derivatives with respect to space and time or any other derived value.
The most convenient way to obtain integrals is to use the “Derived Values” in the Results section of the new ribbon (or the Model Builder if you’re not running Windows®).
How to add volume, surface, or line integrals as Derived Values.
You can refer to any available solution by choosing the corresponding data set. The Expression field is the integrand and allows for dependent or derived variables. For transient simulations, the spatial integral is evaluated at each time step. Alternatively, the settings window offers Data Series Operations, where Integration can be selected for the time domain. This results in space-time integration.
Example of Surface Integration Settings with additional time integration via the Data Series Operation.
The Average is another Derived Value related to integration. It equals an integral, which is divided by the volume, area, or length of the considered domain. The Average Data Series Operation additionally divides by the time horizon. Derived Values are very useful, but because they are only available for postprocessing, they cannot handle every type of integration. That is why COMSOL provides more powerful and flexible integration tools. We demonstrate these methods with an example model below.
We introduce a simple heat transfer model, a 2D aluminum unit square in the (x,y)-plane. The upper and right sides are fixed at room temperature (293.15 K) and on the left and lower boundary, a General inward heat flux of 5000 W/m^2 is prescribed. A stationary solution and a time-dependent solution after 100 seconds are shown in the following figures.
Component Coupling Operators are, for example, needed when several integrals are combined in one expression, when integrals are requested during calculation, or in cases where a set of path integrals are required. Component Coupling Operators are defined in the Definitions section of the respective component. At that stage, the operator is not evaluated yet. Only its name and domain selection are fixed.
How to add Component Coupling Operators for later use.
For our example, we first want to calculate the spatial integral over the stationary temperature, which is given by

\int_{\Omega} T\,\mathrm{d}\Omega.
In the COMSOL software, we use an integration operator, which is named intop1 by default.
In the next step, we demonstrate how an Integration operator can also be used within the model. We could, for example, ask what heating power we need to apply to obtain an average temperature of 303.15 K, which equals an average temperature increase of 10 K compared to room temperature. First, we need to compute the difference between the desired and the actual average temperature. The average is calculated by the integral over T, divided by the integral over the constant function 1, which gives the area of the domain. Fortunately, this type of calculation can easily be done with an Average operator in COMSOL. By default, such an operator is named aveop1. (Note that the average over the domain is the same as the integral for our example. That is because the domain has unit area.) The corresponding difference is given by

aveop1(T)-303.15[K].
Next, we need to find the General heat flux on the left and lower boundary, so that the desired average temperature is satisfied. To this end, we introduce an additional degree of freedom named q_hot and an additional constraint as a global equation. The General inward heat flux is replaced by q_hot.
How to add an additional degree of freedom and a global equation, which forces the average temperature to 303.15 K.
Solving this coupled system with a stationary study results in q_{hot}=5881.30 W/m^2. This value has to be prescribed as a General inward heat flux boundary condition to achieve an average temperature of 303.15 K in the whole domain.
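The mechanism behind the global equation can be mimicked outside of COMSOL: treat the heat flux as an extra unknown and ask a root finder to drive the average-temperature residual to zero. Here is a minimal sketch in Python for a 1D conduction analogue (the geometry, closed-form solution, and conductivity value are illustrative, not the model above):

```python
# 1D analogue of the global-equation approach: a rod on [0, 1] with
# conductivity k, temperature fixed to T_right at x = 1 and an inward
# flux q at x = 0 gives T(x) = T_right + (q/k) * (1 - x).
# We treat q as an extra unknown and enforce avg(T) = T_target.
k = 238.0          # W/(m*K), roughly aluminum (illustrative value)
T_right = 293.15   # K
T_target = 303.15  # K

def avg_T(q):
    # average of T(x) over [0, 1], in closed form for this simple rod
    return T_right + q / (2.0 * k)

def residual(q):
    return avg_T(q) - T_target

# simple bisection, playing the role of COMSOL's coupled solve
lo, hi = 0.0, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
q_hot = 0.5 * (lo + hi)
print(q_hot)  # 2*k*(T_target - T_right) = 4760 W/m^2
```

The coupled solve in COMSOL does the same thing implicitly: the global equation adds one residual, and the solver adjusts q_hot until it vanishes.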
A frequently asked question we receive in Support is: How can one obtain the spatial antiderivative? The following application of integration coupling answers this question. The antiderivative is the counterpart of the derivative, and geometrically, it enables the calculation of arbitrary areas bounded by function graphs. One important application is the calculation of probabilities in statistical analyses. To demonstrate this, we fix y=0 in our example and denote the antiderivative of T(x,0) by u(x). This means that \frac{\partial u}{\partial x}=T(x,0). A representation of the antiderivative is the following integral

u(\bar x)=\int_0^{\bar x} T(x,0)\,\mathrm{d}x,
where we use \bar x in order to distinguish the integration and the output variable. In contrast to the integrals above, we here have a function as a result, rather than a scalar quantity. We need to include the information that for each \bar x\in[0,1] the corresponding value of u(\bar x) requires an integral to be solved. Fortunately, this is easy to set up in the COMSOL environment and requires only three ingredients, so to speak. First, a logical expression can be used to reformulate the integral as

u(\bar x)=\int_0^1 T(x,0)\cdot(x\le\bar x)\,\mathrm{d}x,

where the logical factor (x\le\bar x) equals 1 if the condition holds and 0 otherwise.
Second, we need an integration operator that acts on the lower boundary of our example domain. Let’s denote it by intop2. Third, we need to include the distinction of integration and output variable. The notation for this situation is source and destination for x and \bar x, respectively. When using an integration coupling operator, the built-in operator dest is available, which indicates that the corresponding expression does not belong to the integration variable. More precisely, it means \bar x=dest(x) in COMSOL. Putting the logical expression and the dest operator together results in the expression T*(x<=dest(x)), which is exactly the input expression that we need for intop2. Altogether, we can calculate the antiderivative by intop2(T*(x<=dest(x))), resulting in the following plot in our example:
How to plot the antiderivative by Integration coupling, the dest operator, and a logical expression.
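Outside of COMSOL, the dest trick can be reproduced in a few lines of NumPy (a numerical sketch, not COMSOL syntax): one fixed quadrature over the boundary, truncated at each output point by the logical factor, exactly as in intop2(T*(x<=dest(x))).

```python
import numpy as np

# Numerical sketch of the trick behind intop2(T*(x<=dest(x))): an
# integral over the whole boundary, with a logical factor that cuts
# it off at each output point.
def antiderivative(T, n=1000):
    # midpoint quadrature nodes and weights on [0, 1]
    x = (np.arange(n) + 0.5) / n
    w = 1.0 / n
    Tx = T(x)
    # u(xj) = sum_i T(xi) * (xi <= xj) * w  -- the (x <= dest(x)) factor
    u = np.array([np.sum(Tx * (x <= xj)) * w for xj in x])
    return x, u

# with T(x) = 2x, the antiderivative vanishing at 0 is x^2
x, u = antiderivative(lambda x: 2 * x)
print(np.max(np.abs(u - x ** 2)))  # small discretization error
```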
COMSOL provides two other integration coupling operators, namely general projection and linear projection. These can be used to obtain a set of path integrals in any direction of the domain. In other words, integration is performed only with respect to one dimension. The result is a function of one dimension less than the domain. For a 2D example the result is a 1D function, which can be evaluated on any boundary. Some more details on how to use these operators will be the subject of a forthcoming blog post on component couplings.
The most flexible way of spatial integration is to add an additional PDE interface. Let’s recall the example of the antiderivative and assume that we want to calculate the antiderivative not only for y=0. The task can be formulated in terms of the PDE

\frac{\partial u}{\partial x}=T
with Dirichlet boundary condition u=0 on the left boundary. The easiest interface to implement this equation is the Coefficient Form PDE interface, which only needs the following few settings:
How to use an additional physics interface for spatial integration.
The dependent variable u represents the antiderivative with respect to x and is available during calculation and postprocessing. Besides flexibility, a further advantage of this method is accuracy, because the integral is not obtained as a derived value, but is part of the calculation and internal error estimation.
We have already mentioned the Data Series Operations, which can be used for time integration. Another very useful method for time integration is provided by the built-in operators timeint and timeavg for time integration or time average, respectively. They are readily available in postprocessing and are used to integrate any time-dependent expression over a specified time interval. In our example, we may be interested in the average temperature between 90 and 100 seconds, i.e.:

\frac{1}{10\,\mathrm{s}}\int_{90\,\mathrm{s}}^{100\,\mathrm{s}} T\,\mathrm{d}t.
The following surface plot shows the resulting integral, which is a spatial function of (x,y):
How to use the built-in time integration operator timeavg.
Similar operators are available for integration over circles, disks, balls, and spheres, namely circint, diskint, ballint, and sphint.
If temporal integrals have to be available in the model, you need to define them as additional dependent variables. Similar to the Coefficient Form PDE example shown above, this can be done by adding an ODE interface of the Mathematics branch. Suppose, for example, that at each time step, the model requests the time integral from start until now over the total heat flux magnitude, which measures the accumulated energy. The variable for the total heat flux is automatically calculated by COMSOL and is named ht.tfluxMag. The integral can be calculated as an additional dependent variable with a Distributed ODE, which is a subnode of the Domain ODEs and DAEs interface. The source term of this domain ODE is the integrand, as shown in the following figure.
How to use an additional physics interface for temporal integration.
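The accumulation mechanism of the Distributed ODE can be sketched numerically: carry the time integral as one extra state u with du/dt = f, stepped alongside the "physics". Here is a minimal Python stand-in, with a made-up source f(t) in place of ht.tfluxMag:

```python
# Sketch of the Distributed ODE idea: the time integral of a source
# f(t) (standing in for ht.tfluxMag) is an extra state u with
# du/dt = f, advanced step by step together with the simulation.
def integrate_ode(f, t_end, n=100000):
    dt = t_end / n
    u, t = 0.0, 0.0
    for _ in range(n):
        u += f(t + 0.5 * dt) * dt  # midpoint evaluation of the source
        t += dt
    return u

# with f(t) = t the accumulated value is t_end**2 / 2
print(integrate_ode(lambda t: t, 2.0))
```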
What is the benefit of such a calculation? The integral can be reused in another physics interface, which may be influenced by the accumulated energy in the system. Moreover, it is now available for all kinds of postprocessing, which is more convenient and faster than using built-in operators. For an example, check out the Carbon Deposition in Heterogeneous Catalysis model, where a domain ODE is used to calculate the porosity of a catalyst as a time-dependent field variable in the presence of chemical reactions.
So far, we have shown how to integrate solution variables during calculation or in postprocessing. We have not yet covered integrals of analytic functions or expressions. To this end, COMSOL provides the built-in operator integrate(expression, integration variable, lower bound, upper bound).
The expression might be any 1D function, such as sin(x). It is also possible to include additional variables, such as sin(x*y). The second argument specifies over which variable the integral is calculated. For example integrate(sin(x*y),y,0,1) yields a function in x, because integration only eliminates the integration variable y. Note that the operator can also handle analytic functions, which need to be defined in the Definitions node of the current component.
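To see why integrating out y leaves a function of x, here is a plain-Python analogue (not COMSOL syntax) of integrate(sin(x*y), y, 0, 1), checked against the closed form (1 - cos(x))/x:

```python
from math import sin, cos

# Numerical analogue of integrate(sin(x*y), y, 0, 1): integrating
# over y eliminates the integration variable and leaves a function of x.
def integrate(f, a, b, n=10000):
    # composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def g(x):
    # corresponds to integrate(sin(x*y), y, 0, 1)
    return integrate(lambda y: sin(x * y), 0.0, 1.0)

# closed form of the integral is (1 - cos(x))/x
print(abs(g(2.0) - (1 - cos(2.0)) / 2.0))  # essentially zero
```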
A spinning wheel experiences centrifugal forces that result in stresses throughout the part. A regular pattern of holes has been cut into the wheel hub to reduce its mass. The von Mises stresses due to the centrifugal forces are shown. It is desirable to further reduce the mass while keeping the stresses below a critical value.
Although we could model the entire wheel at once, this part has both mirror and rotational symmetry, making it possible to reduce the model and thereby minimize the computational requirements. Symmetry boundary conditions are used to restrain the part.
A body load is applied in terms of the rotational velocity, rotational axis, and material density to model the centrifugal force. The model is solved using the stationary solver, that is, assuming a constant rotational speed.
In this case, let’s assume that there is already a manufacturing process in place, and we would like to make a minimal change to the overall design of the part in order to reduce retooling costs. A natural choice of design variables would be to change the radii of the holes in the hub. Therefore, we go back to the geometry sequence and parameterize both the hole radii as well as their locations. We can also figure out, based purely on a geometric analysis, that there must be bounds on the maximum radius of each hole, otherwise the regions between the holes would get too thin and the holes would overlap. We will also put a bound on the minimum radius, since we do not want the holes to disappear completely.
The optimization objective here will be simply to reduce the mass of the part, which is the integral of the material density over all domains.
The optimization objective is to minimize the mass, the integral of the density.
The constraint is a little more complex: we need to keep the peak stress in the part below a critical value. However, we do not know ahead of time where the peak stress will occur. If we make either the inner or outer holes too small, a stress concentration will develop around the hole. If we make either of the radii too large, the material between the holes becomes too thin, also leading to high stresses. Therefore, we must monitor the maximum stress throughout the part and constrain it to stay below a specified peak stress. This is a non-differentiable constraint, so it requires a gradient-free optimization method.
The peak stress is monitored via a Domain Probe, and given the name PeakStress.
The peak stress variable is constrained to stay within an upper bound.
To solve the optimization problem, an Optimization feature is added to the Study branch. The Nelder-Mead method is one of two gradient-free methods available (the other is Coordinate Search). The gradient-free optimization algorithms also allow the geometry to be remeshed as the dimensions change.
The objective function and constraint are defined from the Optimization branch in the Model Tree. The control variables are given initial values, and we specify upper and lower bounds. The optimal design is significantly different: the mass is reduced by 20% while maintaining the constraint on the peak stress.
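The structure of this problem can be illustrated with a toy script. Everything below is made up for illustration: a surrogate mass and "peak stress" model in two hole radii, a penalty standing in for the non-differentiable constraint, and a crude coordinate search playing the role of Nelder-Mead:

```python
# Toy analogue (all numbers and models invented) of the gradient-free,
# constrained optimization: reduce the mass of a part by growing two
# hole radii r1, r2 while a surrogate "peak stress" stays below a limit.
def mass(r):
    r1, r2 = r
    return 1.0 - 0.5 * (r1 ** 2 + r2 ** 2)      # bigger holes, less mass

def peak_stress(r):
    r1, r2 = r
    # rises for small holes (stress concentration) and for large holes
    # (thin material between the holes), mimicking the behavior above
    return 1.0 / r1 + 1.0 / r2 + 4.0 * (r1 + r2) ** 2

LIMIT = 13.0                                    # allowed peak stress
BOUNDS = (0.05, 0.9)                            # radius bounds

def objective(r):
    # penalty instead of a derivative of the non-differentiable constraint
    return mass(r) + (1e6 if peak_stress(r) > LIMIT else 0.0)

r, step = [0.2, 0.2], 0.1
while step > 1e-4:
    improved = False
    for i in (0, 1):
        for d in (step, -step):
            trial = list(r)
            trial[i] = min(max(trial[i] + d, BOUNDS[0]), BOUNDS[1])
            if objective(trial) < objective(r):
                r, improved = trial, True
    if not improved:
        step *= 0.5
print(r, mass(r), peak_stress(r))
```

The search ends on the constraint boundary, just as the real design does: the holes grow until the peak stress limit binds.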
The new user interface makes computing such a coordinate system easy by solving a flow-like equation (or in some cases an elasticity equation) where you simply define an inlet and an outlet for the “flow” of your coordinate’s principal axis. Now, curvilinear coordinates that follow curved geometry objects aren’t necessarily uniquely defined. What happens, for example, if the cross section of your model has narrow parts? Is the anisotropic material of the model trimmed as if by a cookie cutter, or is it squeezed together in the narrow parts? Does your object have sharp corners? Because of these ambiguities, three different methods are offered: Diffusion Method, Elasticity Method, and Flow Method. These all give slightly different coordinates corresponding to the underlying equation solved (the equations are outlined in the COMSOL Multiphysics Reference Manual). There is also a fourth method, User Defined, where you are free to type in your own mathematical expressions for the curvilinear coordinates’ principal vector field.
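The core idea of the Diffusion Method can be sketched with a toy finite-difference script (this is an illustration, not COMSOL's actual solver): solve Laplace's equation between the inlet and the outlet, then take the gradient of the solution as the principal direction field.

```python
import numpy as np

# Toy sketch of the Diffusion Method: solve Laplace's equation on a
# unit square with phi = 0 at the "inlet" (left edge), phi = 1 at the
# "outlet" (right edge), and insulated top and bottom walls.
# grad(phi) then serves as the principal axis of the coordinate system.
n = 21
phi = np.zeros((n, n))
for _ in range(5000):                                 # Jacobi iteration
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
    phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]     # insulated walls
    phi[:, 0], phi[:, -1] = 0.0, 1.0                  # inlet / outlet
gy, gx = np.gradient(phi)
# on this trivial geometry the direction field is uniform, pointing in +x
print(gx[n // 2, n // 2], gy[n // 2, n // 2])
```

On a curved geometry the same field bends to follow the shape, which is exactly what makes it useful as a curvilinear coordinate.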
The curvilinear coordinates can be used not only for defining anisotropic materials, but for all kinds of other applications, such as electrical currents or visualization. In a new tutorial model of the Nonlinear Structural Materials Module, four different curvilinear coordinates interfaces are used to visualize the fiber directions of an anisotropic Holzapfel-Gasser-Ogden material model. This is a hyperelastic material model for biomechanics applications and is useful for representing collagenous soft tissue in arterial walls.
The figure shows the fiber orientation for different fiber families in the media and adventitia. Arteries have layered structures with the intima inside, followed by the media and the adventitia. The two outer layers are predominantly responsible for the mechanical behavior. Both layers are made of collagenous soft tissues that show prominent strain stiffening. Families of collagen fibers give each layer anisotropic properties. These fiber reinforced structures enable blood vessels to sustain large elastic deformation. The Holzapfel-Gasser-Ogden (HGO) constitutive model described in a literature reference captures the anisotropic nonlinear mechanical response observed in excised artery experiments. This model demonstrates how this hyperelastic material is used in COMSOL Multiphysics, and the results are compared to those reported in the literature. The anisotropic directions are visualized using the new curvilinear coordinates user interface.
Let’s now take a look at a very simple heat transfer example and how we can use the new tool to compute the temperature in an S-shaped geometry with an anisotropic material that follows the shape. Similar structures are found in smartphones and flat screens and are used as passive heat sinks where a highly anisotropic thermal conductivity spreads heat laterally. This particular example is created in COMSOL by using the Sweep geometry operation to sweep a rectangular cross section along an S-shaped curve.
A rectangular cross section and a parametric curve used as a basis for a geometric sweep.
The final swept geometry.
We can, of course, also import a CAD file. Once we have created the geometry, we continue by adding a Curvilinear Coordinates user interface from the Mathematics branch of the Model Wizard. The next step is to define an Inlet and an Outlet boundary condition. This will define the principal direction of the coordinate system.
An Inlet boundary condition is used to define the source of the curvilinear coordinates’ principal direction.
The other directions are computed automatically, but you can guide their definition by, for example, aligning them with one of the coordinate axes. In this example, we use the Diffusion Method. Also, if we would like an orthogonal coordinate system to be defined automatically, we can select the “Create base vector” check-box, as seen in this screenshot:
The “Create base vector” check-box.
Now we can just solve and get a visualization of the coordinate system:
A visualization of the computed curvilinear coordinates.
In this example we also add a Heat Transfer in Solids user interface, and to get something interesting we create a disk (using a Work Plane and a Circle) at the top surface on which we defined a high temperature condition. At the same time, we assign a cold temperature condition along the entire bottom surface of the model. When using an anisotropic material with a high conductivity in one direction, this is the temperature profile we get:
Temperature field in an anisotropic material.
We can see that the heat is spreading in the length-direction of the shape.
You may ask: How do I create and reference an anisotropic material? The first step is to create the material under the Materials node in the Model Tree. In this case, the thermal conductivity in the local x-direction is high, while it is low in the local y- and z-directions. We can formally write this as: k_x >> (k_y=k_z).
Definition of an anisotropic thermal conductivity.
But this is just the definition in a local (abstract) coordinate system. Then we need to reference the curvilinear coordinates that were automatically computed. This is done in the Heat Transfer in Solids settings window:
Reference to the computed curvilinear coordinates.
By referencing the Curvilinear System in this way, the anisotropic thermal conductivity defined by k_x, k_y, and k_z above will “know how to bend” accordingly.
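In matrix terms, "knowing how to bend" amounts to rotating the diagonal local conductivity into the global frame with the computed base vectors. A small NumPy sketch (the angle and conductivity values are illustrative):

```python
import numpy as np

# The local conductivity diag(kx, ky, kz) is rotated into the global
# frame by the matrix R whose columns are the local base vectors:
# K_global = R * diag(kx, ky, kz) * R^T.
def rotate_conductivity(k_local, R):
    return R @ np.diag(k_local) @ R.T

kx, ky, kz = 400.0, 1.0, 1.0          # illustrative, k_x >> k_y = k_z
# local x-axis tilted 45 degrees in the global x-y plane
c = np.cos(np.pi / 4)
R = np.array([[c, -c, 0.0],
              [c,  c, 0.0],
              [0.0, 0.0, 1.0]])
K = rotate_conductivity((kx, ky, kz), R)
print(K)  # large off-diagonal xy terms steer heat along the tilted axis
```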
Finally, we need to make sure that we first solve for the curvilinear coordinates and then for the heat transfer in solids. (Otherwise the heat transfer in solids wouldn’t know what material to use.)
This is done by using two Studies. Study 1 is used to compute the curvilinear coordinates and Study 2 is used for the heat transfer in solids. These are the settings for Study 1:
The Study 1 settings.
The Study 2 settings are similar, but here we also need to define what to do with values of variables not solved for. In the section called Values of Dependent Variables, we simply reference the solution of Study 1:
The Study 2 Settings.
Now solve for Study 1 first and then Study 2. That’s it. We’ve seen how quickly we can set up a custom coordinate system for any shape. In this example we used a simple S-shaped geometry, but you can also try this now for geometry models with branches and other more complex geometrical features. Since this curvilinear coordinate method is computational, it works for any CAD model. Enjoy!
The Thin Elastic Layer boundary condition is suitable for modeling systems involving very thin elastic layers. Applied to an interior boundary, it decouples the displacements between the two sides of the boundary and connects them with elastic and viscous forces of equal magnitude but opposite direction, proportional to the relative displacements and velocities. This means that you can replace any thin elastic layer in your geometry with an interior surface of zero thickness, apply the Thin Elastic Layer boundary condition to that surface, and assign it the stiffness and damping properties of the layer. The boundary condition can be used in any type of analysis, including stationary, time-dependent, frequency-domain, and eigenfrequency studies.
Thin Elastic Layer boundary condition settings.
Next, let’s analyze an application that can benefit from the use of the Thin Elastic Layer boundary condition. Consider a composite piezoelectric transducer, for example. The composite piezoelectric transducer consists of piezoceramic, aluminum, and adhesive layers. The adhesive layer binds the piezoceramic and aluminum layers. In the simulation considered here, the length of the longest side of the adhesive layer is 27.5 mm and it can be as little as 0.02 mm thick. Thus, the aspect ratio of the adhesive layer is quite high, equal to 1.38 × 10^{3}. The piezoceramic and aluminum layers have the same length for the longest side, but are far thicker, with thicknesses of 10 mm and 5 mm, respectively. An AC potential is applied on the electrode surfaces of both sides of the piezoceramic layer.
The objective of the following simulation is to find the first six eigenfrequencies of this composite piezoelectric transducer without explicitly drawing and meshing the thin adhesive layer. Instead we will consider its effect by using the Thin Elastic Layer boundary condition available in the MEMS, Structural Mechanics, and Acoustics Modules. This transducer model is taken from the Model Library of the MEMS Module where the adhesive is modeled explicitly for a thicker adhesive layer. The figure below shows that the geometry containing a very thin adhesive layer is replaced by a geometry with just an interior boundary, which is then assigned a Thin Elastic Layer boundary condition.
The Piezoelectric Device interface is used to simulate the composite structure. The boundary conditions consist of an applied potential and ground boundary conditions on both sides of the piezoceramic layer. The inner boundary representing the adhesive layer is assigned the Thin Elastic Layer boundary condition. Given the Young’s modulus (E), Poisson’s ratio (v), and thickness (t) of the thin elastic layer, the spring constants per unit area in the directions normal (k_{n}) and tangential (k_{t}) to the boundary can be estimated as

k_{n}=\frac{E(1-v)}{t(1+v)(1-2v)} and k_{t}=\frac{G}{t},

where G is the shear modulus. Note the two asymptotes for k_{n}: if v is low, or closer to 0, k_{n}\approx E/t, and if v is high, or closer to 0.5, k_{n}\approx K/t, where K is the bulk modulus. These values of normal and tangential stiffness per unit area are used as the spring constants per unit area in the Thin Elastic Layer boundary condition.
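These estimates are easy to evaluate in a few lines, assuming the standard thin-layer formulas k_n = E(1-v)/(t(1+v)(1-2v)) and k_t = G/t; the material values below are adhesive-like but illustrative:

```python
# Spring constants per unit area for a thin elastic layer of thickness t,
# assuming k_n = E*(1-nu)/(t*(1+nu)*(1-2*nu)) and k_t = G/t.
def layer_stiffness(E, nu, t):
    G = E / (2.0 * (1.0 + nu))        # shear modulus
    k_n = E * (1.0 - nu) / (t * (1.0 + nu) * (1.0 - 2.0 * nu))
    k_t = G / t
    return k_n, k_t

# illustrative adhesive-like values: E = 3 GPa, nu = 0.35, t = 0.02 mm
k_n, k_t = layer_stiffness(3e9, 0.35, 2e-5)
print(k_n, k_t)

# asymptote check: for nu close to 0, k_n approaches E/t
kn0, _ = layer_stiffness(3e9, 1e-4, 2e-5)
print(abs(kn0 * 2e-5 / 3e9 - 1.0) < 1e-3)  # True
```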
An eigenfrequency calculation is performed for values of the thickness of the adhesive layer ranging between 0.01 mm and 0.38 mm.
The following results show a comparison of six eigenfrequencies calculated using the Thin Elastic Layer boundary condition against the eigenfrequencies calculated by modeling the adhesive layer explicitly. The eigenfrequencies calculated using the Thin Elastic Layer boundary condition are indicated by an asterisk marker on solid lines, and the eigenfrequencies calculated by modeling the adhesive layer explicitly are indicated by dashed lines without any markers.
The above results show that for lower adhesive layer thickness values, the solution obtained using the Thin Elastic Layer boundary condition closely matches the solution obtained by modeling the adhesive explicitly. For larger thickness values, the difference between the two calculations grows with increasing adhesive layer thickness. A sufficiently thick adhesive or elastic layer can be modeled explicitly. For very thin adhesive layers, however, or for simulating any thin elastic layer, the Thin Elastic Layer boundary condition is a more efficient way to capture the effect of the layer without modeling it explicitly, while still accounting for its thickness and elastic properties.
There are several other important uses of the Thin Elastic Layer boundary condition. One such application is for large CAD models represented by a COMSOL Assembly, where the Thin Elastic Layer boundary condition can be used to approximate mechanical contact conditions for frequency-domain studies. Other uses, typically in combination with using a nonlinear stiffness, include modeling of fracture zones in geomechanics, simplified mechanical contact, and delamination of adhesive layers.
Let us start by looking at some data that was generated in another analysis package:
%X       | Y        | Z        | VectorX  | VectorY  | VectorZ
-0.03041 | 0.013353 | 0.138253 | 0.001493 | 0.003518 | -0.00302
-0.03862 | 0.01627  | 0.137537 | 0.001332 | 0.003296 | -0.00329
-0.0355  | 0.010981 | 0.132823 | 9.60E-04 | 0.00287  | -0.00287
. . .
The first line you see is a header for the columnar data. We have XYZ data, and at each of these points we have the x-, y-, and z-components of a vector, which is a force that we will want to read into COMSOL Multiphysics. The remaining rows of the file are the point cloud data.
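COMSOL handles the import for you, but to make the file layout concrete, here is a short Python sketch that parses data in this format. The embedded sample rows are taken from the table above; reading from an actual file on disk is omitted:

```python
import io

# A small sample in the same layout as the file above; a real file
# would be opened with open(...) instead of io.StringIO.
sample = """\
%X Y Z VectorX VectorY VectorZ
-0.03041 0.013353 0.138253 0.001493 0.003518 -0.00302
-0.03862 0.01627 0.137537 0.001332 0.003296 -0.00329
"""

points, vectors = [], []
for line in io.StringIO(sample):
    line = line.strip()
    if not line or line.startswith("%"):  # '%' marks the header line
        continue
    x, y, z, fx, fy, fz = map(float, line.split())
    points.append((x, y, z))    # point cloud coordinates
    vectors.append((fx, fy, fz))  # force vector at each point

print(len(points), "points read")  # -> 2 points read
```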
To read in this data, go to Model > Definitions > Functions and define a new Interpolation function. Like this:
Next you want to fill in the form as shown below:
Set the Data Source to File, and use the Browse button to locate the text file on your disk. The Spreadsheet data format is the default setting, and other formats are described in the documentation. Set the Number of Arguments to 3, since we are reading in XYZ data here, and toggle on the Use space coordinates as arguments check-box since the data we are reading in is a function of position. Switch the Frame drop-down menu to Material. Why? Because we are going to be applying loads to a structural problem, and by setting the Frame to Material, we specify that the loads are applied in the original, undeformed, configuration rather than the deformed, or Spatial, frame.
Finally, you want to enter a Function name. Here we use Fx, Fy, and Fz for the components of the force vector. The Position in file column specifies that these data are in the three columns after the space coordinates. Note that COMSOL automatically detects that there are three arguments and sets the Number of arguments field to 3.
Also note the Interpolation and Extrapolation settings. A linear interpolation method means that the spreadsheet data is mapped linearly from the source mesh points in the data file onto the destination mesh in COMSOL Multiphysics. If the COMSOL mesh lies outside of the space defined by the external data, then a constant extrapolation is used. These defaults are reasonable in most cases, and more details about the mapping are given in the documentation.
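To make these defaults concrete, here is a minimal one-dimensional sketch of linear interpolation with constant extrapolation. This only illustrates the mapping idea; it is not COMSOL’s actual multi-dimensional implementation:

```python
from bisect import bisect_left

def interp_const_extrap(xs, ys, x):
    """Linear interpolation inside [xs[0], xs[-1]], constant outside,
    mimicking the default Linear/Constant settings (1D illustration)."""
    if x <= xs[0]:
        return ys[0]   # constant extrapolation below the data range
    if x >= xs[-1]:
        return ys[-1]  # constant extrapolation above the data range
    i = bisect_left(xs, x)
    w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + w * (ys[i] - ys[i - 1])

xs = [0.0, 1.0, 2.0]
ys = [0.0, 10.0, 0.0]
print(interp_const_extrap(xs, ys, 0.5))  # -> 5.0 (linear, inside range)
print(interp_const_extrap(xs, ys, 3.0))  # -> 0.0 (constant, outside range)
```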
After you click the Import button, the form will look like this:
Wish to read in a new data file instead? You also have the option to Discard the data.
Now let us see how to use this point cloud data in a model. Let’s suppose we wish to compute stresses in an impeller. The loads in the data file we have just read in represent the fluid loads on the surface of the vanes. (We could just as easily have read in volumetric data, but in this example we have read in surface data.) The impeller model, and loads, are shown below:
Simulating loads on an impeller.
The blue arrows represent the loads read in from the file. Let’s take a look at how the boundary condition is defined here:
And that’s it! Just by calling Fx, Fy, and Fz we have used the loads defined in our text file. Here are the results, showing the stresses as well:
Stresses in an impeller.
To better illustrate probing, let’s use an example. We’ll show you how to use probes in the case of a transient thermal stress analysis. (Tip: Read our previous blog post for some background on thermal stress analysis.)
Let’s analyze a bipolar plate of a proton exchange membrane fuel cell (PEMFC). The geometry of this model is taken from the Model Library of the Structural Mechanics Module. The PEMFC is a candidate technology for future hydrogen-powered vehicles, but the very expensive materials needed have so far limited its use to high-cost applications, such as certain space missions. Fuel cell simulations contribute to designing less expensive fuel cells and making this technology more widely available. Fuel cell design requires extensive multiphysics simulations, including stress, heat, CFD, and electrochemical analysis. We won’t look at CFD or electrochemical analysis here, but will instead show you a mechanical simulation. If you are interested in more advanced fuel cell simulations, check out the model tutorials of the Batteries & Fuel Cell Module.
A fuel cell stack consists of unit cells with an anode, a membrane, and a cathode connected in series through bipolar plates. The bipolar plates also serve as gas distributors for the hydrogen and air that are fed to the anode and cathode compartments, respectively. The picture below shows a schematic drawing of a fuel cell stack. The unit cell and the surrounding bipolar plates are shown in detail in the upper-right corner of the figure.
The fuel cell operates at temperatures just below 100°C (212°F), which means that it has to be heated at start-up. The heating process induces thermal stresses in the bipolar plates. The figure below shows the detailed model geometry. The plate consists of gas slits that form the gas channels in the cell, holes for the tie rods that keep the stack together, and the heating elements (in the picture labeled Heat source), which are positioned in the middle of the gas feeding channel for the electrodes.
In the COMSOL model of the bipolar plate, we use the Thermal Stress user interface and apply a volume heat source to the two central cylindrical domains.
There are also convective cooling boundary conditions applied to the sides of the plate.
A total power of 8 W is distributed over the two heating elements. We’ll ramp up the heat power by defining a look-up table that we name startup(), with time as the input argument. The bracket notation with 1/s (see image below) converts the input argument t to a unitless quantity before it is passed to the look-up table. This unit conversion is not required, but it is good practice to take charge of the unit handling.
The interpolation table is defined under Model Definitions and represents an increase from 0 to 1 over 10 seconds. We use the Piecewise cubic interpolation option to make sure we get a smooth curve. This curve will modulate the input power of 8 W over time so that at t=0 s the input power is 0 W and at t=10 s the input power is the final 8 W. We’ll see below that the time for the bipolar plate to reach steady-state is much longer than 10 seconds due to the relatively inefficient cooling:
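The ramp can be sketched in a few lines of code. The cubic “smoothstep” below is a stand-in for the piecewise cubic look-up table — it has the same endpoints and smooth behavior, but is not the exact curve COMSOL interpolates:

```python
def startup(s):
    """Smooth 0-to-1 ramp over s in [0, 10]; a cubic 'smoothstep'
    standing in for the piecewise cubic look-up table."""
    u = min(max(s / 10.0, 0.0), 1.0)  # clamp to [0, 1]
    return u * u * (3.0 - 2.0 * u)    # smooth: zero slope at both ends

P_total = 8.0  # W, total power over the two heating elements

def heat_power(t_seconds):
    # In the model, t[1/s] strips the unit before the table look-up
    return P_total * startup(t_seconds)

print(heat_power(0.0))   # -> 0.0
print(heat_power(10.0))  # -> 8.0
```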
Setting up probes is done from the Model Definitions node in the Model Tree of the COMSOL Desktop. In this case we’ll choose a Domain Point Probe (there are several other types of probes). By using the Point and surface normal Line entry method, we can interactively pick a point on a line that extends from a point on the model boundary in the direction normal to that boundary. The depth along the line can then be fine-tuned with the slider control, and you also get the exact position of the point in x, y, and z coordinates.
The probe location is indicated with a red dot along the line. Here, an arbitrary point is picked on the inside of the plate:
Underneath the Domain Point Probe node in the Model Builder tree you will find a node called Point Probe Expression. Here you can set the field variable you wish to evaluate and monitor during the solution process. The default for a thermal stress analysis is the temperature field T, but this could be any field expression you’d like to monitor, including gradient (partial derivative) components. For example, the partial derivative of T with respect to X would be entered as d(T,X) or simply TX (note the capital X).
In the Study Settings window, we can use the Range tool to specify a start and stop time. Units can be used here and we’ll solve for 10 hours, entered as 10[h].
Now let’s see how probes can be used to avoid storing the entire solution for a large number of time steps. In the Range tool, we set the Number of values to 2. This is the smallest possible value and ensures that the full solution is stored only at t=0 and t=10[h]. The accuracy of the time-dependent solver is controlled by the Relative tolerance; in this case, the tolerance is lowered from the original 0.01 to 0.0001 (see image above). The lower the tolerance, the shorter the time steps taken by the solver. This will have an impact on our probe data, because the probe can be set to update at the same time steps as those taken internally by the solver. (The time-stepping algorithm used by COMSOL for this simulation is a so-called variable-order BDF method that adapts its steps in time based on the solution and the tolerance settings.)
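To see why a tighter relative tolerance produces more probe points, consider this toy adaptive integrator. It uses step doubling with explicit Euler on a simple heating ODE, which is far cruder than COMSOL’s variable-order BDF method, but it shows the same tolerance-versus-step-count trade-off:

```python
def solve_adaptive(rtol, t_end=10.0):
    """Toy adaptive explicit Euler with step doubling/halving on
    dT/dt = (100 - T); returns the accepted solver time steps."""
    t, T, dt = 0.0, 20.0, 0.1
    steps = [t]
    while t < t_end:
        dt = min(dt, t_end - t)
        full = T + dt * (100.0 - T)        # one full step
        half = T + 0.5 * dt * (100.0 - T)  # two half steps
        half = half + 0.5 * dt * (100.0 - half)
        err = abs(full - half) / max(abs(half), 1.0)
        if err <= rtol:
            t, T = t + dt, half
            steps.append(t)
            dt *= 1.5   # error small enough: accept and grow the step
        else:
            dt *= 0.5   # error too large: retry with a smaller step
    return steps

loose = solve_adaptive(rtol=1e-2)
tight = solve_adaptive(rtol=1e-4)
print(len(loose), len(tight))  # the tighter tolerance takes more steps
```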
In the Study Settings window, in the section called Results While Solving, we change from the Output from solver option (which would just give us the results at t=0 and t=10[h]) to Steps taken by solver. Like this:
Select Compute from the Study node. During the solution, we’ll now see a Probe Plot and Probe Table of the temperature while the solver is working. If you have a simulation that takes a long time to solve, you can use the displayed probe information to check whether something was set up wrong in the model. If so, you can stop the simulation before it finishes, go back and change the settings, and start over.
As you can see, the probe output has many more data points in time than just t=0 and t=10[h]. By lowering the solver tolerance setting you can increase the number of data points. The table allows you to copy-and-paste the results for use in a spreadsheet. You can also change the settings of the Probe Plot if you wish to use different units or ordering of the axes (here we get temperature T vs. time t).
As a next step we can of course also visualize the Temperature field at 10 hours (36,000 s) as well as the von Mises Stress with an overlaid mesh plot.
Visualization of Temperature field at 10 hours.
Visualization of von Mises Stress with overlaid mesh.
Note: The mesh in this example has prismatic finite elements; triangles swept through the thickness of the plate.
CAD design of a pipe for fluid flow analysis. Only the solid is present and meshable.
Right click geometry and select Cap Faces.
Select the edges that are adjacent to the new cap.
The capped and filled geometry.
Let’s have a look at the physics settings for one of the models in the Model Library: Heat Sink with Surface-to-Surface Radiation. This model is interesting in its own right (think three modes of heat transfer) but I’ll focus on the icons.
Conjugate Heat Transfer node plus sub-nodes.
(Bonus Tip: I used the new “Sort by Space Dimension” feature.)
If you have experience with COMSOL, you probably recognize this structure. But have you noticed the details of the symbols?
Node icon designs indicate different space dimensions.
If a node has an icon with a fuchsia patch, you can be sure it’s a boundary condition. If it’s just the shaded “peanut” shape, it’s a domain (or 3D volume). How about some more info? The capital “D” indicates a default condition, which is automatically applied and cannot be deleted.
“D” means “Default”.
The icons are actually dynamic, meaning they change depending on what node is selected in the Model Builder. Check this out:
Selecting “Temperature 1” shows how this condition overrides “Thermal Insulation 1” and contributes with “Inlet 1”. Note how the icons change.
When I select the “Temperature 1” condition, a (soon-to-be-not) mysterious triangle appears near the bottom of the “Thermal Insulation 1” icon. This triangle and its position indicate that “Thermal Insulation 1” has been overridden by the selected node, namely “Temperature 1”. There is also a circle that appears next to “Inlet 1”. This indicates contribution. You can confirm all this in the setting window of “Temperature 1” by expanding the “Override and Contribution” section:
Contents of the “Override and Contribution” section of the “Temperature 1” node.
What if you now select “Thermal Insulation 1”, you ask? The icons change to look like this:
Icons change to indicate that “Thermal Insulation 1” is overridden by “Temperature 1” and “Outflow 1”.
Now the red triangle is on the “Temperature 1” and “Outflow 1” nodes, above the peanut-shaped icon. This indicates that both the “Temperature 1” and “Outflow 1” conditions override the selected node, “Thermal Insulation 1”.
There’s even more to the story, but you can read all about it in the documentation. While you may actually get away without being aware of these features, as you advance in using COMSOL, knowing this can really be a time saver.
What’s your favorite COMSOL convenience feature? The Player or the Report Generator? Feel free to share your experience in the comments.
If you have seen one of our ads or magazine prints, you’ll have noticed that we tend to incorporate actual multiphysics models. When we create these high-resolution images, we use a simple trick that is also useful to anyone looking to include a high-res image of their model in an article or paper.
Here’s how, in two ways:
A dialog box will appear where you can define the image quality and image target (clipboard or file). First specify the DPI value for the intended hardcopy output. For instance, if you intend to print on a 300 DPI printer, set the DPI to 300. This adjusts font and line sizes to correspond with the size of the hardcopy. Then set the resolution. If you are using the image for an article or other high-quality printed material, I recommend setting the width to 4,096 pixels, the height to 4,096 pixels, and the DPI to 300. This produces a high-res image that is 34.7×34.7 cm (~13.66×13.66 in) in a 300 DPI hardcopy output.
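The printed size follows directly from the pixel dimensions and the DPI; here is a quick check of the arithmetic:

```python
# Physical print size of a 4,096-pixel-wide image at 300 DPI
width_px = 4096
dpi = 300

width_in = width_px / dpi   # pixels divided by dots-per-inch gives inches
width_cm = width_in * 2.54  # 1 inch = 2.54 cm

print(f"{width_in:.2f} in -> {width_cm:.1f} cm at {dpi} DPI")
```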
You can also find this tip in the support knowledge base.