
How can I extract data (fieldOutputs) which is bigger than my RAM from an Abaqus odb file using the C++ API

I am using the c++ API to access *.odb files. 我正在使用c ++ API访问* .odb文件。 Reading the file is no problem, unless the file is bigger than my RAM. 除非文件大于我的RAM,否则读取文件没有问题。

There are two routines in the documentation for reading the data (in my case fieldOutputs) from the odb file:

1. Bulk data

odb_FieldOutput& disp = lastFrame.fieldOutputs()["U"];
const odb_SequenceFieldBulkData& seqDispBulkData = disp.bulkDataBlocks();
int numDispBlocks = seqDispBulkData.size();
for (int iblock=0; iblock<numDispBlocks; iblock++) {
    const odb_FieldBulkData& bulkData = seqDispBulkData[iblock];
    int numNodes = bulkData.length();
    int numComp = bulkData.width();
    float* data = bulkData.data();
    int* nodeLabels = bulkData.nodeLabels();
    for (int node=0,pos=0; node<numNodes; node++) {
        int nodeLabel = nodeLabels[node];
        cout << "Node = " << nodeLabel;
        cout << " U = ";
        for (int comp=0;comp<numComp;comp++) {
            cout << data[pos++] << " ";
        }
        cout << endl;
    }
}

2. Value

const odb_SequenceFieldValue& displacements =  lastFrame.fieldOutputs()["U"].values();
int numValues = displacements.size();
int numComp = 0;
for (int i=0; i<numValues; i++) {
    const odb_FieldValue val = displacements[i];
    cout << "Node = " << val.nodeLabel();
    const float* const U = val.data(numComp);
    cout << ", U = ";
    for (int comp=0; comp<numComp; comp++) {
        cout << U[comp] << " ";
    }
    cout << endl;
}

What I would like to do is read the data from the file and save it into a mat file.

Shape of the data:

An odb file is a database which can be represented as a tree structure.

It contains steps. Each step contains frames, and each frame contains fieldOutputs. Those fieldOutputs can be matrices or vectors; the dimensions depend on the number of nodes and the number of parameters per fieldOutput.
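The hierarchy above can be modelled with plain containers to make the data shape concrete. This is only an illustration; the type names below are made up and are not the Abaqus API:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-ins for the odb tree: step -> frame -> fieldOutput.
// A fieldOutput is a matrix: one row per node, one column per component.
struct FieldOutput {
    std::vector<int> nodeLabels;           // one label per node
    std::vector<std::vector<float>> data;  // data[node][component]
};
struct Frame { std::map<std::string, FieldOutput> fieldOutputs; };
struct Step  { std::vector<Frame> frames; };

// Build a tiny example tree: one step, one frame, one "U" field with
// 2 nodes and 3 displacement components per node (a 2x3 matrix).
Step buildExample() {
    FieldOutput u;
    u.nodeLabels = {1, 2};
    u.data = {{0.1f, 0.2f, 0.3f}, {0.4f, 0.5f, 0.6f}};
    Frame frame;
    frame.fieldOutputs["U"] = u;
    Step step;
    step.frames.push_back(frame);
    return step;
}
```

The total size of such a tree is roughly (steps × frames × nodes × components × sizeof(float)), which is why a large model with many frames can exceed the available RAM even though each single fieldOutput fits easily.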

My question:

Is one of the mentioned routines capable of loading files bigger than the RAM successively? If yes, I would be happy to get some hints.

Additional information:

Documentation: http://abaqus.software.polimi.it/v6.12/books/ker/default.htm and http://xn--90ajn.xn--p1ai:2080/v6.12/pdf_books/SCRIPT_USER.pdf. I am using Abaqus 6.12 and the Visual Studio 2010 compiler.

Is one single fieldOutput really bigger than your RAM? Do you have more than 1 billion elements?

I think you are iterating over a large number of fieldOutputs and running out of memory while doing so.

You can run out of memory there because the Abaqus odb API doesn't release memory correctly (by my observation). There are some undocumented functions in the C++ API to release memory, which I can provide if I find them.

Even with those I couldn't get the API to release the memory. I got around this issue by opening the odb, reading a chunk of data, closing the odb, then reopening it and reading the next chunk. My observation was that it helps to wait one or two seconds after each chunk so that the memory is released properly.
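The open/read/close cycle described above can be sketched as a loop. To keep the sketch self-contained, the odb calls are replaced here by a placeholder `readChunk` backed by an in-memory array; in real code that function would open the odb, copy one block of field data out, and close the odb again:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for the data behind the odb file (placeholder, not Abaqus API).
static const std::vector<float> g_fileData = {1, 2, 3, 4, 5, 6, 7};

// Placeholder for an open-odb / read / close-odb cycle. In the real code the
// odb handle would be created and destroyed inside this function, which is
// what actually gives the API a chance to free its memory.
std::vector<float> readChunk(std::size_t offset, std::size_t chunkSize) {
    std::size_t end = std::min(offset + chunkSize, g_fileData.size());
    return std::vector<float>(g_fileData.begin() + offset,
                              g_fileData.begin() + end);
}

// Process the whole field output chunk by chunk, so that at most chunkSize
// values are resident at once; `sink` receives each chunk (e.g. a writer
// that appends the data to a file on disk).
template <typename Sink>
void processInChunks(std::size_t total, std::size_t chunkSize, Sink sink) {
    for (std::size_t offset = 0; offset < total; offset += chunkSize) {
        std::vector<float> chunk = readChunk(offset, chunkSize); // reopen + read
        sink(chunk);                                             // save to disk
        // chunk is destroyed here; in the real odb workflow one could also
        // sleep one or two seconds to let the API release memory, as noted above
    }
}
```

The key point of the pattern is that peak memory use is bounded by the chunk size, not by the total field size, as long as the sink writes each chunk out before the next one is read.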

So reading the data chunk by chunk into Matlab (saving it there as you go) would be a way to get it to work.
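One simple way to hand the chunks over is to append them to a raw binary file. Note this is not the MAT-file format itself (writing real .mat files would need e.g. the matio library or MATLAB's C API); a raw float stream can still be loaded in MATLAB with `fopen`/`fread(fid, inf, 'single')`:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Append one chunk of float data to a raw binary file on disk, so only the
// current chunk has to be held in memory at any time.
void appendChunk(const std::string& path, const std::vector<float>& chunk) {
    std::ofstream out(path, std::ios::binary | std::ios::app);
    out.write(reinterpret_cast<const char*>(chunk.data()),
              static_cast<std::streamsize>(chunk.size() * sizeof(float)));
}
```

Keeping track of the number of nodes and components per fieldOutput separately lets you reshape the flat float stream back into matrices on the MATLAB side.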

Of course the bulkData approach would be preferable if you read whole fieldOutputs.
