
OpenCL Limit on for loop size?

UPDATE: clEnqueueReadBuffer(command_queue, c_mem_obj, CL_TRUE, 0, LIST_SIZE * sizeof(double), C, 0, NULL, NULL); is returning -5, CL_OUT_OF_RESOURCES. This function/call should never return this!

I've started using OpenCL and have come across a problem. If I allow a for loop (in the kernel) to run 10,000 times, every element of C comes back as 0; if I allow the loop to run 8,000 times, the results are all correct.

I have added waits around the kernel to ensure it completes, thinking I was pulling the data out before completion, and have tried both clWaitForEvents and clFinish. No errors are signalled by any of the calls. When I used ints, the for loop worked at a size of 4,000,000. Floats and doubles have the same problem, although floats work at 10,000 but not at 20,000; when I used floats I removed #pragma OPENCL EXTENSION cl_khr_fp64 : enable to check that wasn't the problem.
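For reference, this is roughly how I wait on the kernel and check how it finished (a sketch: the clGetEventInfo status query is an extra check that is not in the full listing below, but command_queue, event and ret are the same variables used there):

cl_int status;
ret = clWaitForEvents(1, &event);                        // block until the kernel's event completes
ret = clGetEventInfo(event, CL_EVENT_COMMAND_EXECUTION_STATUS,
                     sizeof(cl_int), &status, NULL);     // query how the command finished
if (status != CL_COMPLETE) {
    // a negative status is the error code the command terminated with
    printf("Kernel did not complete cleanly: %d\n", status);
}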

Is this some weird memory thing, or am I using OpenCL wrong? I realise that in most kernels I won't be implementing for loops like this, but it still seems like an issue. I have also removed __private to see if that was the problem; no change. So is there a limit on the size of for loops in OpenCL kernels? Is it hardware specific? Or is this a bug?

The kernel is a simple one: it adds two arrays (A + B) and writes the result to a third (C). To get a feel for performance, I put a for loop around each calculation to slow it down / increase the number of operations per run.

The code for the kernel is as follows:

#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void vector_add(__global double *A, __global double *B, __global double *C)
{

    // Get the index of the current element
    int i = get_global_id(0);

    // Do the operation

    for (__private unsigned int j = 0; j < 10000; j++)
    {
        C[i] = A[i] + B[i];
    }
}
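Since the kernel relies on cl_khr_fp64, one extra check worth doing on the host (not something in my original code, just a sketch) is to confirm the device actually advertises double support before building:

// Sketch: verify the device reports cl_khr_fp64 before building a kernel that uses it.
// Assumes device_id has been obtained as in the host code below; needs <string.h> for strstr.
size_t ext_size = 0;
clGetDeviceInfo(device_id, CL_DEVICE_EXTENSIONS, 0, NULL, &ext_size);
char *extensions = (char*)malloc(ext_size);
clGetDeviceInfo(device_id, CL_DEVICE_EXTENSIONS, ext_size, extensions, NULL);
if (strstr(extensions, "cl_khr_fp64") == NULL) {
    printf("Device does not advertise cl_khr_fp64 (double precision).\n");
}
free(extensions);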

The host code I'm running is as follows (I make sure the types are consistent between both pieces of code when I switch between float and double):

#include <stdio.h>
#include <stdlib.h>
#include <iostream>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define MAX_SOURCE_SIZE (0x100000)

int main(void) {
    // Create the two input vectors
    int i;
    const int LIST_SIZE = 4000000;
    double *A = (double*)malloc(sizeof(double)*LIST_SIZE);
    double *B = (double*)malloc(sizeof(double)*LIST_SIZE);
    for(i = 0; i < LIST_SIZE; i++) {
        A[i] = static_cast<double>(i);
        B[i] = static_cast<double>(LIST_SIZE - i);
    }

    // Load the kernel source code into the array source_str
    FILE *fp;
    char *source_str;
    size_t source_size;

    fp = fopen("vector_add_kernel.cl", "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.\n");
        exit(1);
    }
    source_str = (char*)malloc(MAX_SOURCE_SIZE);
    source_size = fread( source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose( fp );

    // Get platform and device information
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
    // clGetPlatformIDs(1, &platform_id, NULL);
    // clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device_id, ret_num_devices);


    cl_int ret = clGetPlatformIDs(1, &platform_id, NULL);
    if (ret != CL_SUCCESS) {
        printf("Error: Failed to get platforms! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device_id, &ret_num_devices);
    if (ret != CL_SUCCESS) {
        printf("Error: Failed to query platforms to get devices! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    /*
    cl_int ret = clGetPlatformIDs(1, &platform_id, NULL);
    if (ret != CL_SUCCESS) {
        printf("Error: Failed to get platforms! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_CPU, 1,
            &device_id, &ret_num_devices);
    if (ret != CL_SUCCESS) {
        printf("Error: Failed to query platforms to get devices! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    */
    // Create an OpenCL context
    cl_context context = clCreateContext( NULL, 1, &device_id, NULL, NULL, &ret);

    // Create a command queue
    cl_command_queue command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    // Create memory buffers on the device for each vector
    cl_mem a_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(double), NULL, &ret);
    cl_mem b_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(double), NULL, &ret);
    cl_mem c_mem_obj = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
            LIST_SIZE * sizeof(double), NULL, &ret);
    if (ret != CL_SUCCESS) {
        printf("Error: Buffer Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Copy the lists A and B to their respective memory buffers
    ret = clEnqueueWriteBuffer(command_queue, a_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(double), A, 0, NULL, NULL);
    ret = clEnqueueWriteBuffer(command_queue, b_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(double), B, 0, NULL, NULL);

    std::cout << "Begin Compile" << "\n";
    // Create a program from the kernel source
    cl_program program = clCreateProgramWithSource(context, 1,
            (const char **)&source_str, (const size_t *)&source_size, &ret);
    if (ret != CL_SUCCESS) {
        printf("Error: Program Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Build the program
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
    if (ret != CL_SUCCESS) {
        printf("Error: ProgramBuild Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Create the OpenCL kernel
    cl_kernel kernel = clCreateKernel(program, "vector_add", &ret);
    if (ret != CL_SUCCESS) {
        printf("Error: Kernel Build Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    std::cout << "End Compile" << "\n";

    std::cout << "Begin Data Move" << "\n";
    // Set the arguments of the kernel
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
    ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&c_mem_obj);
    std::cout << "End Data Move" << "\n";

    // Execute the OpenCL kernel on the list
    size_t global_item_size = LIST_SIZE; // Process the entire lists
    size_t local_item_size = 64; // Process in groups of 64

    std::cout << "Begin Execute" << "\n";
    cl_event event;
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
            &global_item_size, &local_item_size, 0, NULL, &event);
    clFinish(command_queue);
    //clWaitForEvents(1, &event);
    std::cout << "End Execute" << "\n";
    if (ret != CL_SUCCESS) {
        printf("Error: Execute Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Read the memory buffer C on the device to the local variable C
    std::cout << "Begin Data Move" << "\n";

    double *C = (double*)malloc(sizeof(double)*LIST_SIZE);
    ret = clEnqueueReadBuffer(command_queue, c_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(double), C, 0, NULL, NULL);
    if (ret != CL_SUCCESS) {
        printf("Error: Read Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    clFinish(command_queue);
    std::cout << "End Data Move" << "\n";

    std::cout << "Done" << "\n";
    std::cin.get();
    // Display the result to the screen
    for(i = 0; i < LIST_SIZE; i++)
        printf("%f + %f = %f \n", A[i], B[i], C[i]);

    // Clean up
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(a_mem_obj);
    ret = clReleaseMemObject(b_mem_obj);
    ret = clReleaseMemObject(c_mem_obj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);
    free(A);
    free(B);
    free(C);
    std::cout << "Number of Devices: " << ret_num_devices << "\n";
    std::cin.get();
    return 0;
}
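One thing the listing above doesn't do is print the compiler output when clBuildProgram fails; a small sketch of that (using the same program and device_id variables as above) looks like:

// Sketch: dump the build log when clBuildProgram returns an error.
if (ret != CL_SUCCESS) {
    size_t log_size = 0;
    clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
    char *log = (char*)malloc(log_size);
    clGetProgramBuildInfo(program, device_id, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
    printf("Build log:\n%s\n", log);
    free(log);
}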

I've had a look on the internet and can't find people with similar problems, which is a concern, as it could lead to code that works well until it's scaled up...

I'm running Ubuntu 14.04 with a laptop graphics card (RC520), which I run with bumblebee/optirun. If this bug isn't reproducible on other machines up to a loop size of 4,000,000, then I will log a bug with bumblebee/optirun.

Cheers

I found the issue: GPUs attached to displays/active VGAs/etc. have a watchdog timer that times out after ~5 s. This is the case for cards that aren't Teslas, which allow this functionality to be turned off. Running on a secondary card is a workaround. This sucks and needs to be fixed ASAP. It's definitely an NVIDIA issue; I'm not sure about AMD, but either way, this is terrible.
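Another possible mitigation, which I haven't verified on this card (so treat it as a sketch), is to keep each individual launch well under the ~5 s watchdog by enqueueing the NDRange in slices via the global work offset (OpenCL 1.1+):

// Sketch: split the launch into slices so each one finishes before the watchdog fires.
// CHUNK is an illustrative value, not tuned; the other variables are from the host code above.
const size_t CHUNK = 250000;
for (size_t offset = 0; offset < (size_t)LIST_SIZE; offset += CHUNK) {
    size_t count = ((size_t)LIST_SIZE - offset < CHUNK) ? (size_t)LIST_SIZE - offset : CHUNK;
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, &offset,
                                 &count, NULL, 0, NULL, NULL);
    clFinish(command_queue);   // each slice completes before the next is submitted
}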

The other workarounds are registry changes on Windows and, on Linux/Ubuntu, editing the X config and placing:

Option "Interactive" "0"

in the Device section for the graphics card. However, xorg.conf is no longer generated automatically in recent versions and may have to be created manually. If anyone has a copy-and-paste console fix for this, that would be great and a better answer.
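For reference, the relevant part of the Device section would look roughly like this (a sketch; the Identifier and Driver values are illustrative and machine-specific):

Section "Device"
    Identifier  "Device0"
    Driver      "nvidia"
    Option      "Interactive" "0"
EndSection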
