The first step in POST1 is to read data from the results file into the database. To do so, model data (nodes, elements, etc.) must exist in the database. If the database does not already contain model data, issue the RESUME command to read the database file, Jobname.db. The database should contain the same model for which the solution was calculated, including the element types, nodes, elements, element real constants, material properties, and nodal coordinate systems.
Caution: The database should contain the same set of selected nodes and elements that were selected for the solution. Otherwise, a data mismatch can occur. For more information about data mismatches, see Appending Data to the Database.
Table 7.1: Saving the Database Properly for Postprocessing

| When postprocessing analysis results obtained using... | Save the database: |
|---|---|
| Linear perturbation procedures | After solving. Reason: The node coordinates are updated with the base analysis displacements during the solution. |
| Contact analysis | After solving. Reason: The nodal connectivity of contact elements is updated and internal nodes are added during the solution. |
| ELBOW290 elements | After the first solution. Reason: The nodal connectivity of elbow elements is updated and internal nodes are added during the solution. |
| SHELL294 elements | After the first solution. Reason: Internal nodes are added during the solution. |
After model data are in the database, load the results data from the results file by issuing one of the following commands: SET, SUBSET, or APPEND.
SET reads results data over the entire model from the results file into the database for a given loading condition, replacing any data previously stored in the database. SET arguments identify the data to be read into the database:
SET, Lstep, Sbstep, Fact, KIMG, TIME, ANGLE, NSET, ORDER
Boundary condition data (constraints and force loads) is also read, but only if either element nodal loads or reaction loads are available. (See OUTRES for more information.) If they are not available, no boundary conditions will be available for listing or plotting. Only constraints and forces are read; surface and body loads are not updated and remain at their last specified value. If the surface and body loads were specified via tabular boundary conditions, however, they will reflect the values corresponding to this results set. Loading conditions are identified either by load step and substep or by time (or frequency).
Example 7.1: Specifying Results Data to Read
SET,2,5       ! Reads results for load step 2, substep 5
SET,,,,,3.89  ! Reads in results at TIME = 3.89 (or
              ! frequency = 3.89, depending on the analysis type)
If you specify a TIME value for which no results are available, the program performs linear interpolation to calculate results for the specified time. For a nonlinear analysis, interpolation between time points usually degrades accuracy; therefore, postprocess at a TIME value for which a solution is available.
Some convenience operations are also available:
Lstep = FIRST reads in the first substep.
Lstep = NEXT reads in the next substep.
Lstep = LAST reads in the last substep.
Lstep = LIST lists the data set number along with its corresponding load step and substep numbers.

You can specify the data set number NSET (retrieved via Lstep = LIST) to request a specific set of results.

ANGLE specifies the circumferential location for harmonic elements (structural PLANE25, PLANE83, SHELL61, and thermal PLANE75, PLANE78).
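As a sketch of how these convenience arguments might be combined (the data set number 3 is hypothetical; check the SET command documentation for the exact argument positions):

```
SET,LIST      ! List available data sets with their load step/substep numbers
SET,FIRST     ! Read the first data set
SET,NEXT      ! Advance to the next data set
SET,LAST      ! Jump to the last data set
SET,,,,,,,3   ! Read data set number 3 directly via the NSET argument
```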
You can postprocess results without reading in the results data if the solution results were saved to the database file (Jobname.db).
Caution: Although you can sometimes omit reading results via SET if the results from the most recently solved time point are already in the database, the results may not be up to date. For example, if OUTRES was issued to write data at a prior time but not for the last time point, data for the prior time point will populate the database. Likewise, if a contact element was in contact at a prior time but not in contact (far-field) at the last time point, contact results for the prior time point will populate the database. In such cases, issuing SET is necessary to ensure that the most current results are used for postprocessing.
Analyses that use distributed-memory parallel (DMP) processing can only postprocess using the results file (Jobname.rst), as no solution results are written to the database. If using DMP, issuing SET is necessary before postprocessing.
Other commands also enable you to retrieve results data:
INRES in POST1 is a companion to OUTRES in the PREP7 and SOLUTION processors. Where OUTRES controls data written to the database and the results file, INRES defines the type of data to be retrieved from the results file (for placement into the database via commands such as SET, SUBSET, and APPEND).
Although not required for postprocessing, INRES limits the amount of data retrieved and written to the database. As a result, postprocessing may require less time.
To read a data set from the results file into the database for the selected portions of the model only, issue SUBSET. Data not specified for retrieval (INRES) is listed as having a zero value.
SUBSET behaves like SET except that it retrieves data for the selected portions of the model only. It is convenient to issue SUBSET to examine results data for a portion of the model. For example, if you are interested only in surface results, you can select the exterior nodes and elements, then issue SUBSET to retrieve results data for just those selected items.
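For example, a surface-only retrieval might be sketched as follows (NSEL,S,EXT selects exterior nodes, and ESLN selects the elements attached to the selected nodes; the load step number is hypothetical):

```
NSEL,S,EXT   ! Select exterior (surface) nodes
ESLN,S       ! Select elements attached to the selected nodes
SUBSET,1     ! Read load step 1 results for the selected items only
```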
Each time you issue SET or SUBSET, the program writes a new set of data over the data currently in the database. APPEND reads a data set from the results file and merges it with the existing data in the database, for the selected model only.
You can issue SET, SUBSET, or APPEND to read data from the results file into the database. The only difference between the commands is how much or what type of data you want to retrieve.
To clear the database of any previous data, issue LCZERO. The command gives you a fresh start for further data storage.
If you set the database to zero before appending data to it, the result is the same as issuing SUBSET, assuming that the arguments on SUBSET and APPEND are equivalent.
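In other words, the following two sequences are expected to leave the same data in the database (a sketch assuming identical load step and substep arguments):

```
! Sequence 1: zero the database, then append
LCZERO       ! Clear the database results to zero
APPEND,2,3   ! Merge load step 2, substep 3 into the zeroed database

! Sequence 2: direct subset read
SUBSET,2,3   ! Read load step 2, substep 3 for the selected items
```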
All options available for SET are also available for SUBSET and APPEND.
By default, SET, SUBSET, and APPEND look for one of these results files: Jobname.rst, Jobname.rth, or Jobname.rmg. You can specify a different file name (FILE) before issuing SET, SUBSET, or APPEND.
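For example, to read results from a file other than the default (the job name other_job is hypothetical):

```
FILE,other_job,rst   ! Point POST1 at other_job.rst instead of Jobname.rst
SET,LAST             ! Read the last data set from that file
```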
When appending data (APPEND), use care not to generate a data mismatch.
Example 7.2: Data Mismatch
/POST1
INRES,NSOL        ! Flag data from nodal DOF solution
NSEL,S,NODE,,1,5  ! Select nodes 1 to 5
SUBSET,1          ! Write data from load step 1 to database
At this point, results data for nodes 1 to 5 from load step 1 are in the database.
NSEL,S,NODE,,6,10  ! Select nodes 6 to 10
APPEND,2           ! Merge data from load step 2 into database
NSEL,S,NODE,,1,10  ! Select nodes 1 to 10
PRNSOL,DOF         ! Print nodal DOF solution results
The database now contains data for both load steps 1 and 2. This is a data mismatch.
When printing nodal solution results (PRNSOL), the program indicates that the data are from the second load step, when in fact data from two different load steps now exist in the database. The load step listed is merely the one corresponding to the most recently stored load step.
Appending data to the database is helpful if you wish to compare results from different load steps; however, if you purposely intend to mix data, it is crucial to keep track of the source of the data appended.
To avoid data mismatches when solving a subset of a model that was solved previously using a different set of elements:
Do not reselect any of the elements that were deselected for the solution currently being postprocessed, or
Remove the earlier solution from the database (by exiting from the program between solutions or by saving the database between solutions).
The element table serves two functions:
It is a tool for performing arithmetic operations among results data.
It allows access to certain element results data that are not otherwise directly accessible, such as derived data for structural line elements. (Although the SET, SUBSET, and APPEND commands read all requested results items into the database, not all data are directly accessible via commands such as PLNSOL, PLESOL, etc.).
Think of the element table as a spreadsheet, where each row represents an element, and each column represents a particular data item for the elements. For example, one column might contain the average SX stress for the elements, while another might contain the element volumes, while yet a third might contain the Y coordinate of the centroid for each element.
To create or erase the element table, issue the ETABLE command.
To identify an element table column, assign a label to it via the Lab argument on the ETABLE command. The label is used as the identifier for all subsequent POST1 commands involving this variable. The data to fill the column is identified by an Item name and a Comp (component) name, the other two arguments on the ETABLE command. For example, for the SX stresses, SX could be the Lab, S would be the Item, and X would be the Comp argument.
Some items, such as the element volumes, do not require a Comp; in such cases, Item is VOLU and Comp is left blank. Identifying data items by an Item, and a Comp if necessary, is called the "Component Name" method of filling the element table. The data accessible with the Component Name method are generally data calculated for most element types or groups of element types.
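A brief sketch of the Component Name method (the labels SX_AVG and EVOL are arbitrary identifiers chosen for this example):

```
ETABLE,SX_AVG,S,X   ! Item = S, Comp = X: average SX stress per element
ETABLE,EVOL,VOLU    ! Item = VOLU, no Comp: element volumes
PRETAB,SX_AVG,EVOL  ! List the two element table columns
```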
The ETABLE command documentation lists, in general, all the Item and Comp combinations. See the "Element Output Definitions" table in each element description in the Element Reference to see which combinations are valid. Table 188.1: BEAM188 Element Output Definitions is an example of such a table for BEAM188. You can use any name in the Name column of the table that contains a colon (:) to fill the element table via the Component Name method. The portion of the name before the colon is input for the Item argument of the ETABLE command; the portion (if any) after the colon is input for the Comp argument. The O and R columns indicate the availability of the items in the file Jobname.out (O) or in the results file (R): a Y indicates that the item is always available, a number refers to a table footnote describing when the item is conditionally available, and a - indicates that the item is not available.
You can load data that is not averaged, or that is not naturally single-valued for each element, into the element table. This type of data includes integration point data, all derived data for structural line elements (such as spars, beams, and pipes) and contact elements, all derived data for thermal line elements, layer data for layered elements, and so on. These data are listed in the "Item and Sequence Numbers for the ETABLE and ESOL Commands" table with each element type description in the Element Reference. Table 188.2: BEAM188 Item and Sequence Numbers is an example of such a table for BEAM188.
The data in the tables is broken down into item groups, such as LS, LEPEL, SMISC, etc. Each item within an item group has an identifying "sequence" number. You load these data into the element table by giving the item group as the Item argument on the ETABLE command and the sequence number as the Comp argument. This is referred to as the "Sequence Number" method of filling the element table.
For some line elements, KEYOPT settings govern the amount of data calculated. This can change the sequence number of a particular data item. Therefore, in these cases a table for each KEYOPT setting is provided.
The ETABLE command works only on the selected elements. That is, only data for the elements you have selected are moved to the element table. By changing the selected elements between ETABLE commands, you can selectively fill rows of the element table.
The same Sequence Number combination may mean different data for different element types. For example, the combination SMISC,1 means P1 for SOLID185 (pressure on face 1), and MECHPOWER for TRANS126 (mechanical power); therefore, if your model has a combination of element types, select elements of one type (ESEL) before issuing the ETABLE command.
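Combining the two points above, a sequence-number retrieval might be sketched as follows (the type number 1, the label MYDATA, and the meaning of SMISC,1 are assumptions; confirm the sequence number in the element's "Item and Sequence Numbers" table before relying on it):

```
ESEL,S,TYPE,,1         ! Select only elements of element type 1
ETABLE,MYDATA,SMISC,1  ! Item group SMISC, sequence number 1, labeled MYDATA
PRETAB,MYDATA          ! List the retrieved values
```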
The element table is not automatically refilled (updated) when you read in a different set of results (such as for a different load step) or when you alter the results in the database (such as by a load case combination). For example, suppose your model consists of beam elements, and you issue the following commands in POST1:
SET,1            ! Read in results for load step 1
ETABLE,ABC,LS,6  ! Move SDIR at end J (KEYOPT(9)=0) to the element table
                 ! under heading "ABC"
SET,2            ! Read in results for load step 2
At this point, the "ABC" column in the element table still contains data for load step 1. To refill (update) the column with load step 2 values, issue an ETABLE,REFL command.
You can use the element table as a "worksheet" to do calculations among results data. This feature is described in Additional POST1 Postprocessing.
To save the element table, issue SAVE,Fname,Ext in POST1 or issue /EXIT,ALL when exiting the program. This saves the table along with the rest of the database onto the database file.

To erase the entire element table from memory, issue ETABLE,ERASE. (Or issue ETABLE,Lab,ERASE to erase just the Lab column of the element table.) The RESET command also erases the element table from memory.
Principal stresses for SHELL61 elements are not readily available for review in POST1. By default, the principal stresses are available for all line elements except in either of the following cases:
You have requested an interpolated time point or angle specification on the SET command.
You have performed load case operations.
In the above cases (including all cases for SHELL61), you must issue the command LCOPER,LPRIN in order to calculate the principal stresses. You can then access this data via ETABLE or any appropriate printing or plotting command.
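For example, after reading results at an interpolated time point, the principal stresses for a line element might be recovered as follows (a sketch; the TIME value is hypothetical, and Item = S with Comp = 1 requests the first principal stress):

```
SET,,,,,2.5      ! Read results at an interpolated TIME = 2.5
LCOPER,LPRIN     ! Recalculate line element principal stresses
ETABLE,PS1,S,1   ! Store the first principal stress under label PS1
PRETAB,PS1       ! List the values
```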
The RESET command reinitializes the POST1 command defaults portion of the database without exiting POST1. The command has the same effect as leaving and re-entering the program.