I'm new to CUDA C and I'm trying to parallelize the following piece of the slave_sort function, which, as you will see, is already parallelized to work with POSIX threads. I have the following structs:
typedef struct prefix_node {
    long densities[MAX_RADIX];
    long ranks[MAX_RADIX];
    char pad[PAGE_SIZE];
} prefix_node;

struct global_memory {
    long Index;                                 /* process ID */
    struct prefix_node prefix_tree[2 * MAX_PROCESSORS];
} *global;
void slave_sort() {
    ...
    long *rank_me_mynum;
    struct prefix_node *n;
    struct prefix_node *r;
    struct prefix_node *l;
    ...
    MyNum = global->Index;
    global->Index++;
    n = &(global->prefix_tree[MyNum]);
    for (i = 0; i < radix; i++) {
        n->densities[i] = key_density[i];
        n->ranks[i] = rank_me_mynum[i];
    }
    offset = MyNum;
    level = number_of_processors >> 1;
    base = number_of_processors;
    while ((offset & 0x1) != 0) {
        offset >>= 1;
        r = n;
        l = n - 1;
        index = base + offset;
        n = &(global->prefix_tree[index]);
        if (offset != (level - 1)) {
            for (i = 0; i < radix; i++) {
                n->densities[i] = r->densities[i] + l->densities[i];
                n->ranks[i] = r->ranks[i] + l->ranks[i];
            }
        } else {
            for (i = 0; i < radix; i++) {
                n->densities[i] = r->densities[i] + l->densities[i];
            }
        }
        base += level;
        level >>= 1;
    }
MyNum is the ID of each process (it is taken from global->Index). After porting the code to a kernel, I want MyNum to be represented by blockIdx.x. The problem is that I get confused with the structs: I don't know how to pass them to the kernel. Can anyone help me? Is the following code right?
__global__ void testkernel(prefix_node *prefix_tree, long *dev_rank_me_mynum,
                           long *key_density, long radix)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    prefix_node *n;
    prefix_node *l;
    prefix_node *r;
    long offset;
    ...
    n = &prefix_tree[blockIdx.x];
    if ((i % numthreads) == 0) {
        for (int j = 0; j < radix; j++) {
            n->densities[j] = key_density[j + radix * blockIdx.x];
            n->ranks[j] = dev_rank_me_mynum[j + radix * blockIdx.x];
        }
        ...
    }
int main(...) {
    long *dev_rank_me_mynum;
    long *key_density;
    prefix_node *prefix_tree;
    long radix = 1024;

    cudaMalloc((void**)&dev_rank_me_mynum, radix * numblocks * sizeof(long));
    cudaMalloc((void**)&key_density, radix * numblocks * sizeof(long));
    cudaMalloc((void**)&prefix_tree, numblocks * sizeof(prefix_node));

    testkernel<<<numblocks, numthreads>>>(prefix_tree, dev_rank_me_mynum,
                                          key_density, radix);
}
The host API code you have posted in your edit looks fine. The prefix_node structure only contains statically declared arrays, so a single cudaMalloc call is all that is needed to allocate memory for the kernel to work on. Your method of passing prefix_tree to the kernel is also fine.
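For completeness, here is a hedged host-side sketch of that allocation pattern, including copying the finished tree back to the host. The names `numblocks`, `numthreads`, and `run` are placeholders carried over from (or invented alongside) the question's code, not a definitive implementation:

```cuda
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical host-side sketch: allocate the device arrays, launch the
// kernel from the question, then copy the whole tree back. Because
// prefix_node contains only statically sized members, one cudaMalloc
// gives a single contiguous array of nodes.
void run(int numblocks, int numthreads, long radix)
{
    prefix_node *d_tree = NULL;
    long *d_rank = NULL, *d_density = NULL;

    cudaMalloc((void**)&d_tree, numblocks * sizeof(prefix_node));
    cudaMalloc((void**)&d_rank, radix * numblocks * sizeof(long));
    cudaMalloc((void**)&d_density, radix * numblocks * sizeof(long));

    testkernel<<<numblocks, numthreads>>>(d_tree, d_rank, d_density, radix);

    // The structs round-trip as plain memory; no per-member copies needed.
    prefix_node *h_tree = (prefix_node*)malloc(numblocks * sizeof(prefix_node));
    cudaMemcpy(h_tree, d_tree, numblocks * sizeof(prefix_node),
               cudaMemcpyDeviceToHost);

    free(h_tree);
    cudaFree(d_tree);
    cudaFree(d_rank);
    cudaFree(d_density);
}
```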
The kernel code, although incomplete, is another story. It seems your intention is to have only a single thread per block operate on one "node" of the prefix_tree. That will be terribly inefficient and will utilise only a small fraction of the GPU's capacity. For example, why do this:
prefix_node *n = &prefix_tree[blockIdx.x];
if ((i % numthreads) == 0) {
    for (int j = 0; j < radix; j++) {
        n->densities[j] = key_density[j + radix * blockIdx.x];
        n->ranks[j] = dev_rank_me_mynum[j + radix * blockIdx.x];
    }
    ...
}
when you could do this:
prefix_node *n = &prefix_tree[blockIdx.x];
for (int j = threadIdx.x; j < radix; j += blockDim.x) {
    n->densities[j] = key_density[j + radix * blockIdx.x];
    n->ranks[j] = dev_rank_me_mynum[j + radix * blockIdx.x];
}
which coalesces the memory reads and uses as many threads per block as you choose to launch, rather than just one; it should be many times faster as a result. So perhaps you should rethink your strategy of directly translating the serial C code you posted into a kernel.
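To make the block-stride idiom concrete, here is a small self-contained sketch in the same spirit; the kernel name `copy_slices` and the array names are invented for the demo, not part of the original code. Each block owns one radix-sized slice, and within a block consecutive threads touch consecutive elements, which is what makes the accesses coalesce:

```cuda
#include <cuda_runtime.h>

// Toy kernel using the block-stride loop: block b processes elements
// [b*radix, (b+1)*radix), striding by blockDim.x so that threads
// 0..blockDim.x-1 always access adjacent addresses in the same pass.
__global__ void copy_slices(long *out, const long *in, long radix)
{
    const long base = radix * blockIdx.x;        // this block's slice
    for (long j = threadIdx.x; j < radix; j += blockDim.x)
        out[base + j] = in[base + j] + 1;        // coalesced load and store
}

int main(void)
{
    const int blocks = 4, threads = 128;
    const long radix = 1024, n = radix * blocks;

    long *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(long));
    cudaMalloc(&d_out, n * sizeof(long));

    copy_slices<<<blocks, threads>>>(d_out, d_in, radix);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

The same loop shape works whether radix is larger or smaller than the block size, which is why it is preferable to hard-coding one thread per element.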