
Using GPU to train model

I have only a little knowledge of using a GPU to train a model. I am using K-means from scikit-learn to train my model. Since my data is very large, is it possible to train this model on a GPU to reduce the computation time? Or could you please suggest any other ways to make use of GPU power?
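For reference, a plain scikit-learn K-means call like the one described probably looks something like the following sketch (the dataset and parameters here are hypothetical stand-ins, since the question does not include code); this runs entirely on the CPU:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in for the large dataset mentioned in the question.
X = np.random.rand(1_000_000, 10)

# Standard scikit-learn K-means; scikit-learn executes this on the CPU only.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
kmeans.fit(X)

print(kmeans.cluster_centers_.shape)
```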

My other question is: if I build K-means with TensorFlow as shown in this blog post,

https://blog.altoros.com/using-k-means-clustering-in-tensorflow.html

will it use the GPU or not?

Thank you in advance.

To check if your GPU supports CUDA: https://developer.nvidia.com/cuda-gpus

Scikit-learn does not support CUDA so far. You may want to use TensorFlow instead: https://www.tensorflow.org/install/install_linux
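As a rough illustration of what that switch looks like, here is a minimal K-means loop written directly against TensorFlow tensor ops (TensorFlow 2 style, rather than the 1.x sessions used in the linked blog); when a GPU build and a visible GPU are present, TensorFlow places these ops on the GPU automatically. The data is random stand-in data, and the sketch does not handle empty clusters:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data; replace with your own large dataset.
points = tf.constant(np.random.rand(100_000, 2), dtype=tf.float32)

k = 5
num_iterations = 50

# Initialize centroids with k randomly chosen points.
centroids = tf.Variable(tf.random.shuffle(points)[:k])

for _ in range(num_iterations):
    # Squared Euclidean distance from every point to every centroid:
    # points (N, 1, D) - centroids (1, k, D) broadcasts to (N, k, D),
    # reduced over D to give a (N, k) distance matrix.
    distances = tf.reduce_sum(
        tf.square(tf.expand_dims(points, 1) - tf.expand_dims(centroids, 0)),
        axis=2)
    # Assign each point to its nearest centroid.
    assignments = tf.argmin(distances, axis=1)
    # Move each centroid to the mean of the points assigned to it
    # (an empty cluster would get a zero centroid in this simple sketch).
    centroids.assign(
        tf.math.unsorted_segment_mean(points, assignments, num_segments=k))

print(centroids.numpy())
```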

I hope this helps.

If you have a CUDA-enabled GPU with Compute Capability 3.0 or higher and install the GPU-supported version of TensorFlow, then it will definitely use the GPU for training.
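To confirm that your TensorFlow installation actually sees the GPU, a quick check like the following can help (this uses the TensorFlow 2 API; the matrix sizes are arbitrary):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means it will fall back to the CPU.
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices('GPU'))

# Optionally log which device each op is placed on, to confirm GPU usage during training.
tf.debugging.set_log_device_placement(True)

# A small matrix multiplication; with a GPU build and a visible GPU,
# the placement log will show it running on /device:GPU:0.
a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
c = tf.matmul(a, b)
print(c.shape)
```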

For additional information on the NVIDIA requirements to run TensorFlow with GPU support, check the following link:

https://www.tensorflow.org/install/install_linux#nvidia_requirements_to_run_tensorflow_with_gpu_support

