
DeepDream taking too long to render image

I managed to install #DeepDream on my server.

I have a dual-core CPU and 2 GB of RAM, but it is taking about 1 minute to process an image of roughly 100 KB.

Any advice?

Do you run it in a virtual machine on Windows or OS X? If so, it's probably not going to get any faster. In a virtual machine (I'm using Docker) you usually can't use CUDA to render the images. I have the same problem, and I'm going to try installing Ubuntu and then the NVIDIA drivers for CUDA. At the moment I'm rendering 1080p images of around 300 KB, and it takes 15 minutes per image on an Intel Core i7 with 8 GB of RAM.
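If you do get CUDA working natively, the Caffe backend used by the original DeepDream notebook can be switched to the GPU with two calls. This is just a minimal sketch, assuming you are running the standard Caffe Python bindings and your NVIDIA card shows up as device 0:

import caffe

# Switch Caffe from its default CPU mode to the GPU.
# Assumes the NVIDIA driver and CUDA toolkit are installed;
# use caffe.set_mode_cpu() to fall back if they are not.
caffe.set_mode_gpu()
caffe.set_device(0)  # index of the GPU to use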

Unless you can move to a better workstation or get a GPU, you'll have to make do with resizing the image:

import numpy as np
import PIL.Image

img = PIL.Image.open('sky1024px.jpg')
# Halve both dimensions before handing the image to DeepDream
img = np.float32(img.resize([int(0.5 * s) for s in img.size]))
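Halving both dimensions leaves only a quarter of the pixels, so every octave DeepDream builds from the image is correspondingly smaller and each pass finishes in roughly a quarter of the time.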

Taking 1 minute to process a 100 KB image is a reasonable turnaround time for #deepdream; these renders simply have a very long baking time. Experimental research software often runs slowly, waiting on a future of faster computers. That said, there are a couple of ways that come to mind to make your setup run faster.
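For instance, the deepdream() helper in the original notebook takes iter_n and octave_n parameters, and lowering them trades detail for speed. A rough sketch, assuming you already have the notebook's net and an img array loaded (the values below are illustrative, not tuned):

# Fewer gradient-ascent steps per octave and fewer octaves finish sooner,
# at the cost of a less detailed "dream". The notebook's defaults are
# iter_n=10 and octave_n=4.
out = deepdream(net, img, iter_n=5, octave_n=3)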

As a rule of thumb, deep learning is hard on both compute and memory. A dual-core machine with 2 GB of RAM is just not a good fit for it. Keep in mind that many of the people who pioneered this field did much of their research on GTX Titan cards, because CPU computation, even on Xeon servers, is prohibitively slow when training deep networks.
