I am using Google's Books API to fetch information about books by ISBN. The response includes thumbnail links along with other information. It looks like this:
"imageLinks": {
"smallThumbnail": "http://books.google.com/books/content?id=tEDhAAAAMAAJ&printsec=frontcover&img=1&zoom=5&source=gbs_api",
"thumbnail": "http://books.google.com/books/content?id=tEDhAAAAMAAJ&printsec=frontcover&img=1&zoom=1&source=gbs_api"
},
I want to download the thumbnails from the links above and store them on the local file system. How can this be done in Python?
Use the urllib module. For example, in Python 2:
import urllib
d = {
    "imageLinks": {
        "smallThumbnail": "http://books.google.com/books/content?id=tEDhAAAAMAAJ&printsec=frontcover&img=1&zoom=5&source=gbs_api",
        "thumbnail": "http://books.google.com/books/content?id=tEDhAAAAMAAJ&printsec=frontcover&img=1&zoom=1&source=gbs_api"
    }
}
urllib.urlretrieve(d["imageLinks"]["thumbnail"], "MyThumbNail.jpg")
In Python 3.x:
from urllib import request
with open("MyThumbNail.jpg", "wb") as outfile:
    outfile.write(request.urlopen(d["imageLinks"]["thumbnail"]).read())
Send an HTTP request to the corresponding URL and fetch the content, then write the content to a file. For example, using the requests library:
import requests

imageLinks = {
    "smallThumbnail": "http://books.google.com/books/content?id=tEDhAAAAMAAJ&printsec=frontcover&img=1&zoom=5&source=gbs_api",
    "thumbnail": "http://books.google.com/books/content?id=tEDhAAAAMAAJ&printsec=frontcover&img=1&zoom=1&source=gbs_api"
}

img = requests.get(imageLinks['thumbnail']).content
with open("myfile.jpg", 'wb') as img_file:
    img_file.write(img)
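If you download many thumbnails, hard-coding "myfile.jpg" won't work. Here is a stdlib-only sketch that derives a filename from the id query parameter of the Google Books content URL; the helper names (thumbnail_filename, save_thumbnail) are my own, not part of any API:

```python
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

def thumbnail_filename(url, suffix=".jpg"):
    # Derive a filename from the "id" query parameter,
    # e.g. ...content?id=tEDhAAAAMAAJ&... -> "tEDhAAAAMAAJ.jpg"
    return parse_qs(urlparse(url).query)["id"][0] + suffix

def save_thumbnail(url, path=None):
    # Download the image at `url` and write it to `path`
    # (derived from the URL's volume id if omitted).
    path = path or thumbnail_filename(url)
    with urlopen(url) as resp, open(path, "wb") as fh:
        fh.write(resp.read())
    return path
```

Then `save_thumbnail(d["imageLinks"]["thumbnail"])` would write `tEDhAAAAMAAJ.jpg` to the current directory.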