
How to activate the Google Colab GPU using just plain Python

I'm new to Google Colab.

I'm trying to do deep learning there.

I have written a class to create and train an LSTM net using just plain Python, not any specific deep learning library such as TensorFlow, PyTorch, etc.

I thought I was using a GPU because I had chosen the GPU runtime type in Colab.

During code execution, however, I sometimes got a message suggesting that I switch out of GPU mode because I was not actually making use of the GPU.

So, my question: how can one use the Google Colab GPU with just plain Python, without special AI libraries? Is there something like "decorator code" to put in my original code so that the GPU gets activated?

It's just easier to use frameworks like PyTorch or TensorFlow.

If not, you can try PyCUDA or Numba, which are closer to "pure" GPU programming. That's even harder than just using PyTorch.
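
To give a sense of what the PyCUDA route looks like, here is a minimal sketch: the kernel is written in raw CUDA C and compiled from Python at runtime. The function name add_one and the array sizes are just illustrative choices, not something from the original question.

import numpy as np
import pycuda.autoinit                     # creates a CUDA context on import
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# Raw CUDA C kernel, compiled with nvcc when SourceModule is built
mod = SourceModule("""
__global__ void add_one(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1.0f;
}
""")
add_one = mod.get_function("add_one")

a = np.zeros(1024, dtype=np.float32)
# drv.InOut copies `a` to the GPU and copies the result back afterwards
add_one(drv.InOut(a), np.int32(a.size), block=(256, 1, 1), grid=(4, 1))

As the snippet shows, you manage thread/block layout and memory transfers yourself, which is why frameworks are usually the easier path.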

Make sure that the NVIDIA drivers are up to date. You can also install the CUDA toolkit (not sure you need it in Colab).

Also install Numba.

You can use conda to install them if you want.

Example:


conda install numba && conda install cudatoolkit

or

pip install numba
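
To confirm that the installation worked and that the Colab runtime actually exposes a GPU, a quick check from plain Python (not part of the original answer) is:

from numba import cuda

print(cuda.is_available())   # True if a CUDA-capable GPU is visible
cuda.detect()                # prints details of the detected GPU(s), typically a Tesla T4 on Colab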

The idea is to use a Numba decorator on the function we want to run on the GPU. Older tutorials use the numba.jit decorator with a target parameter, where target tells the JIT which backend to compile for ("cpu" or "cuda"); "cuda" corresponds to the GPU, while "cpu" makes the JIT optimize the code to run faster on the CPU. In current Numba versions the target="cuda" option has been removed, and GPU kernels are written with the numba.cuda.jit decorator instead, as in the example below.


from numba import cuda
import numpy as np

# CUDA kernel: each GPU thread increments one element of the array
@cuda.jit
def func(a):
    i = cuda.grid(1)        # absolute index of this thread in the 1-D grid
    if i < a.size:          # guard threads that fall outside the array
        a[i] += 1
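
A kernel written with cuda.jit must be launched with an explicit grid configuration. A minimal launch sketch follows; the array size and block size here are arbitrary choices for illustration, not values from the original post.

import numpy as np
from numba import cuda

a = np.zeros(10_000_000, dtype=np.float32)       # example input array
threads_per_block = 256
blocks_per_grid = (a.size + threads_per_block - 1) // threads_per_block

# Numba copies the NumPy array to the device, runs the kernel, and copies it back
func[blocks_per_grid, threads_per_block](a)
print(a[:5])                                     # [1. 1. 1. 1. 1.]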
