Slower parallel program with OpenMP and PThreads than sequential
I have a problem with parallelizing the following matrix multiplication program. The optimized versions are slower than the sequential one, or faster by only a tiny margin. I have been looking for the mistake but cannot find it... I also tested on another machine and got the same results...
Thanks for your help.
Main:
int main(int argc, char** argv){
    if((matrixA).size != (matrixB).size){
        fprintf(ResultFile, "\tError for %s and %s - Matrix A and B are not of the same size ...\n", argv[1], argv[2]);
    }
    else{
        allocateResultMatrix(&resultMatrix, matrixA.size, 0);

        if(*argv[5] == '1'){ /* sequential execution */
            begin = clock();
            matrixMultSeq(&matrixA, &matrixB, &resultMatrix);
            end = clock();
        }
        if(*argv[5] == '2'){ /* execution with OpenMP */
            printf("Max number of threads: %i \n", omp_get_max_threads());
            begin = clock();
            matrixMultOmp(&matrixA, &matrixB, &resultMatrix);
            end = clock();
        }
        if(*argv[5] == '3'){ /* execution with PThreads */
            pthread_t threads[NUMTHREADS];
            pthread_attr_t attr;
            int i;
            struct parameter arg[NUMTHREADS];

            pthread_attr_init(&attr); /* initialize the thread attributes */
            begin = clock();
            for(i = 0; i < NUMTHREADS; i++){ /* set up the individual threads */
                arg[i].id = i;
                arg[i].num_threads = NUMTHREADS;
                arg[i].dimension = matrixA.size;
                arg[i].matrixA = &matrixA;
                arg[i].matrixB = &matrixB;
                arg[i].resultMatrix = &resultMatrix;
                pthread_create(&threads[i], &attr, worker, (void *)(&arg[i]));
            }
            pthread_attr_destroy(&attr);
            for(i = 0; i < NUMTHREADS; i++){ /* wait for the threads to return */
                pthread_join(threads[i], NULL);
            }
            end = clock();
        }

        t = end - begin;
        t /= CLOCKS_PER_SEC;
        if(*argv[5] == '1')
            fprintf(ResultFile, "\tTime for sequential multiplication: %0.10f seconds\n\n", t);
        if(*argv[5] == '2')
            fprintf(ResultFile, "\tTime for OpenMP multiplication: %0.10f seconds\n\n", t);
        if(*argv[5] == '3')
            fprintf(ResultFile, "\tTime for PThread multiplication: %0.10f seconds\n\n", t);
    }
}
void matrixMultOmp(struct matrix * matrixA, struct matrix * matrixB, struct matrix * resultMatrix){
    int i, j, k, l;
    double sum = 0;
    l = (*matrixA).size;
    #pragma omp parallel for private(j,k) firstprivate(sum)
    for(i = 0; i <= l; i++){
        for(j = 0; j <= l; j++){
            sum = 0;
            for(k = 0; k <= l; k++){
                sum = sum + (*matrixA).matrixPointer[i][k] * (*matrixB).matrixPointer[k][j];
            }
            (*resultMatrix).matrixPointer[i][j] = sum;
        }
    }
}
void mm(int thread_id, int numthreads, int dimension, struct matrix* a, struct matrix* b, struct matrix* c){
    int i, j, k;
    double sum;
    i = thread_id;
    while(i <= dimension){
        for(j = 0; j <= dimension; j++){
            sum = 0;
            for(k = 0; k <= dimension; k++){
                sum = sum + (*a).matrixPointer[i][k] * (*b).matrixPointer[k][j];
            }
            (*c).matrixPointer[i][j] = sum;
        }
        i += numthreads;
    }
}
void * worker(void * arg){
    struct parameter * p = (struct parameter *) arg;
    /* field name matched to main(), which sets arg[i].num_threads */
    mm((*p).id, (*p).num_threads, (*p).dimension, (*p).matrixA, (*p).matrixB, (*p).resultMatrix);
    pthread_exit((void *) 0);
}
Here is the output with the timings:
Starting calculating resultMatrix for matrices/SimpleMatrixA.txt and matrices/SimpleMatrixB.txt ...
Size of matrixA: 6 elements
Size of matrixB: 6 elements
Time for sequential multiplication: 0.0000030000 seconds
Starting calculating resultMatrix for matrices/SimpleMatrixA.txt and matrices/SimpleMatrixB.txt ...
Size of matrixA: 6 elements
Size of matrixB: 6 elements
Time for OpenMP multiplication: 0.0002440000 seconds
Starting calculating resultMatrix for matrices/SimpleMatrixA.txt and matrices/SimpleMatrixB.txt ...
Size of matrixA: 6 elements
Size of matrixB: 6 elements
Time for PThread multiplication: 0.0006680000 seconds
Starting calculating resultMatrix for matrices/ShortMatrixA.txt and matrices/ShortMatrixB.txt ...
Size of matrixA: 100 elements
Size of matrixB: 100 elements
Time for sequential multiplication: 0.0075190002 seconds
Starting calculating resultMatrix for matrices/ShortMatrixA.txt and matrices/ShortMatrixB.txt ...
Size of matrixA: 100 elements
Size of matrixB: 100 elements
Time for OpenMP multiplication: 0.0076710000 seconds
Starting calculating resultMatrix for matrices/ShortMatrixA.txt and matrices/ShortMatrixB.txt ...
Size of matrixA: 100 elements
Size of matrixB: 100 elements
Time for PThread multiplication: 0.0068080002 seconds
Starting calculating resultMatrix for matrices/LargeMatrixA.txt and matrices/LargeMatrixB.txt ...
Size of matrixA: 1000 elements
Size of matrixB: 1000 elements
Time for sequential multiplication: 9.6421155930 seconds
Starting calculating resultMatrix for matrices/LargeMatrixA.txt and matrices/LargeMatrixB.txt ...
Size of matrixA: 1000 elements
Size of matrixB: 1000 elements
Time for OpenMP multiplication: 10.5361270905 seconds
Starting calculating resultMatrix for matrices/LargeMatrixA.txt and matrices/LargeMatrixB.txt ...
Size of matrixA: 1000 elements
Size of matrixB: 1000 elements
Time for PThread multiplication: 9.8952226639 seconds
Starting calculating resultMatrix for matrices/HugeMatrixA.txt and matrices/HugeMatrixB.txt ...
Size of matrixA: 5000 elements
Size of matrixB: 5000 elements
Time for sequential multiplication: 1981.1383056641 seconds
Starting calculating resultMatrix for matrices/HugeMatrixA.txt and matrices/HugeMatrixB.txt ...
Size of matrixA: 5000 elements
Size of matrixB: 5000 elements
Time for OpenMP multiplication: 2137.8527832031 seconds
Starting calculating resultMatrix for matrices/HugeMatrixA.txt and matrices/HugeMatrixB.txt ...
Size of matrixA: 5000 elements
Size of matrixB: 5000 elements
Time for PThread multiplication: 1977.5153808594 seconds
As already mentioned in the comments, your first and main problem is the use of clock(). It returns the processor time consumed by the program. What you are looking for is the wall-clock time the program takes to execute. For sequential code the two are the same, but with multiple cores they are not at all. Fortunately, OpenMP has you covered: use the function omp_get_wtime() instead.
Finally, you need much larger matrices to see any benefit from multithreading. If creating and managing the threads costs more than the actual work the threads perform, you will never see any gain from parallelism. Timing a 6x6 matrix multiplication is therefore pointless. I would start at 1000x1000 and check at least 2000x2000 and 8000x8000.