
MPI-IO only writes data from one process

For some reason, MPI-IO is only writing the data from one of my processes out to a file. I used MPI_File_open to open the file, MPI_File_set_view to set the view for each process, and MPI_File_write_all to write the data out. When I run the code, everything seems to execute fine and without any error, but the output file starts with garbled junk on the first line of the CSV (it just says NULL NULL NULL repeatedly when I open the file in VS Code), and the remainder of the file contains only the output from the second process's block (I'm using block decomposition across two processes). I can't figure out why my program isn't outputting values correctly (or at least the first process's values), so I figured I'd ask here.

I've attached the code here and omitted the parts that don't apply to the problem at hand:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include <time.h>
#include <mpi.h>


int main (int argc, char** argv) {

    int iproc, nproc;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &iproc);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    //Inputs:
    int depth = 3;
    float p_x_error = 0.05;
    float p_z_error = 0.05;

    int max_char[] = {128, 640, 328};
    int i_max_char = max_char[(depth%3)];

    int num_data_qubits = depth * depth;
    int data_qubit_x_error[ depth + 2][ depth + 2 ];
    int data_qubit_z_error[ depth + 2  ][ depth + 2 ];
    int ancilla_qubit_value[ depth + 1 ][ depth + 1 ];

    // Parallel block decomposition variables
    int total_num_iter = pow(4, num_data_qubits);           // Total number of outer loop iterations
    int block_size = floor(total_num_iter/nproc);       // Number of iterations per process (block)

    if (total_num_iter%nproc > 0) { block_size += 1; }  // Add 1 if blocks don't divide evenly

    int iter_first = iproc * block_size;
    int iter_last = iter_first + block_size;

    MPI_Status status;
    MPI_File fh;

    char buf[i_max_char];

    //Output:
    MPI_File_open(MPI_COMM_SELF, "testfile.csv", MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, iproc * block_size * strlen(buf) * sizeof(char), MPI_CHAR, MPI_CHAR, "native", MPI_INFO_NULL);

    if(iproc == 0) {
        printf("Block size: %d\n", block_size);
    }

    for ( int i = iter_first; i < iter_last; i++ ) {

        // A bunch of stuff happens where values are written to the 2d arrays listed above

        char label_list[i_max_char];
        strcpy(label_list, "\n");
        char anc_name[4];  // needs room for "-1," plus the terminating null

        // Output the ancilla qubit values in proper format
        int ancilla_value;
        for (int k=1; k < depth; k++) {
            if (k%2 == 0) {
                ancilla_value = (ancilla_qubit_value[depth][k] == 1) ? -1 : 1;
                sprintf(anc_name, "%d,", ancilla_value);
                strcat(label_list, anc_name);
            }
            for (int j=depth-1; j > 0; j--) {
                if (k == 1 && j%2 == 0) {
                    ancilla_value = (ancilla_qubit_value[j][k-1] == 1) ? -1 : 1;
                    sprintf(anc_name, "%d,", ancilla_value);
                    strcat(label_list, anc_name);
                } else if (k == (depth - 1) && j%2 == 1) {
                    ancilla_value = (ancilla_qubit_value[j][k+1] == 1) ? -1 : 1;
                    sprintf(anc_name, "%d,", ancilla_value);
                    strcat(label_list, anc_name);
                }
                ancilla_value = (ancilla_qubit_value[j][k] == 1) ? -1 : 1;
                sprintf(anc_name, "%d,", ancilla_value);
                strcat(label_list, anc_name);
            }
            if (k%2 == 1) {
                ancilla_value = (ancilla_qubit_value[0][k] == 1) ? -1 : 1;
                sprintf(anc_name, "%d,", ancilla_value);
                strcat(label_list, anc_name);
            }
        }

        // For printing label list:
        strcat(label_list, "\"[");
        char qubit_name[6];
        int first = 1;

        for (int k = 1; k < depth + 1; k++) {
            for (int j = depth; j > 0; j--) {
                if (data_qubit_x_error[j][k] == 1) {
                    if (first == 1) {
                        first = 0;
                    } else {
                        strcat(label_list, ", ");
                    }
                    sprintf(qubit_name, "'X%d%d'", (k-1), (depth-j));
                    strcat(label_list, qubit_name);
                }
                if (data_qubit_z_error[j][k] == 1) {
                    if (first == 1) {
                        first = 0;
                    } else {
                        strcat(label_list, ", ");
                    }
                    sprintf(qubit_name, "'Z%d%d'", (k-1), (depth-j));
                    strcat(label_list, qubit_name);
                }
            }
        }
        strcat(label_list, "]\"");

        MPI_File_write_all(fh, label_list, strlen(label_list) * sizeof(char), MPI_CHAR, MPI_STATUS_IGNORE);

    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

After lots of digging, I finally found the answer. The value I used as the offset for MPI_File_set_view() relied on strlen(buf), but since buf had only been declared and never populated, strlen(buf) was returning 1 rather than the intended line length. I remedied this by changing the offset value to (MPI_Offset) (iproc * block_size * i_max_char), so that each process's view starts at the correct byte, which seems to have resolved the issue!
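For anyone hitting the same problem, here is a minimal sketch of the corrected open/set-view sequence, using the same variable names (iproc, block_size, i_max_char, fh) as in the code above; the disp temporary is just for readability, and the rest of the program is unchanged:

    // Displacement for this rank's region: rank index * lines per block * max line length in bytes.
    // i_max_char is a known constant, so the offset no longer depends on the (uninitialized) contents of buf.
    MPI_Offset disp = (MPI_Offset) iproc * block_size * i_max_char;

    MPI_File_open(MPI_COMM_SELF, "testfile.csv", MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, disp, MPI_CHAR, MPI_CHAR, "native", MPI_INFO_NULL);

Note that with this fixed-stride layout each rank's region is block_size * i_max_char bytes, so if a rank's lines add up to less than that, there will be some unwritten bytes between the end of its data and the start of the next rank's view.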
